<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>Artificial Intelligence &amp; Algorithms | RENOR &amp; Partners S.r.l.</title>
	<atom:link href="https://renor.it/en/blog/artificial-intelligence-algorithms/feed/" rel="self" type="application/rss+xml" />
	<link>https://renor.it/en/blog/artificial-intelligence-algorithms/</link>
	<description></description>
	<lastBuildDate>Mon, 09 Mar 2026 21:04:59 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9.4</generator>
	<item>
		<title>My Interview on “Industry Experts”: AI and Ethics</title>
		<link>https://renor.it/en/blog/my-interview-on-industry-experts-ai-and-ethics/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Mon, 09 Mar 2026 21:04:57 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[Blog]]></category>
		<category><![CDATA[AI Ethics]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Artificial Intelligence Ethics]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[interview]]></category>
		<category><![CDATA[simone renzi]]></category>
		<guid isPermaLink="false">https://renor.it/?p=2914</guid>

					<description><![CDATA[<p>I was recently invited as a guest on “Industry Experts,” a program focused on entrepreneurs and companies that discusses topics of great interest. My main contribution, within my professional context, focused on Artificial Intelligence. In the debate surrounding AI, there is a tendency to attribute an artificial personhood to it, using pronouns such as “she” [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/my-interview-on-industry-experts-ai-and-ethics/">My Interview on “Industry Experts”: AI and Ethics</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I was recently invited as a guest on “Industry Experts,” a program focused on entrepreneurs and companies that discusses topics of great interest. My main contribution, within my professional context, focused on Artificial Intelligence.</p>

<p>In the debate surrounding AI, there is a tendency to attribute an artificial personhood to it, using pronouns such as “she” or “he.” I myself sometimes fall into this mistake during discussions. In my view, the reason for this lapse is mainly linked to the complexity of the concept of intelligence. Even though it is followed by the adjective “artificial,” the word itself tends to evoke, in the human mind, an association with the human figure, because intelligence has traditionally been considered a distinctive characteristic of human beings.</p>

<p>Speaking about AI as if it were human creates both a lexical and cognitive problem, as it can lead to harmful consequences: it may cause the brain to perceive AI-generated text as part of a conscious line of reasoning. It is therefore essential, before using these technologies—especially for younger people—to distinguish between what is produced by an algorithm based on statistical evaluations over hundreds of billions of data points and what is instead the product of human experience, shaped by intuition and consciousness.</p>

<h2 class="wp-block-heading" id="h-l-ai-nell-ambiente-lavorativo">AI in the Workplace</h2>

<p>An algorithm cannot replicate human consciousness, because it arises from feelings and reflections generated by lived experiences.</p>

<p>Being human means knowing pain, which guides us toward greater empathy. Imagine a person who has just been laid off and is ready to tell their story.</p>

<p class="p1">The response of an artificial intelligence system might be something like this:</p>
<p class="p3"><i>“I’m sorry that you lost your job. It’s a difficult situation. You might consider updating your résumé, looking for new opportunities online, and evaluating training courses to improve your skills.”</i></p>

<p>The response is accurate and rational, but it lacks empathy for the human situation.</p>

<p class="p1"><b>The response of a human being, by contrast, might be something like this:</b></p>
<p class="p3"><i>“I’m so sorry, I can imagine how hard this must be. If you want, we can think about it together and look at your options; if I can, I’ll give you a hand.”</i></p>

<p>In this context, typically human behaviors can be observed: the recognition of emotion as a shared experience, and the emotional exchange that encourages the listener to resonate with the state of mind of the person who has been laid off. There is a relational presence that offers availability: “we can think about it together,” “if I can, I’ll give you a hand.” Instead of immediate technical assistance, there is an emotional exchange, mutual understanding, and a solution proposed after a period of reflection.</p>

<p>Human empathy goes beyond simple logical understanding; it includes a component of emotional resonance.</p>

<p class="p1">An even stronger example could be this:</p>
<p class="p1">“Today I sold my father’s violin.”</p>

<p>A possible response from an AI might be: “If you tell me the model and brand of the violin and the selling price, I can tell you whether you made a good deal.”</p>

<p>A possible response from a human might be: “But the violin your father always played, the one you carefully kept in that display case? It must have been very difficult for you to part with it; that violin surely carried many memories.”</p>

<p>Here it becomes even more evident that the interlocutor’s attention is not focused on the object itself, but on what lies behind it: memories, emotions, and lived experiences.</p>

<p>Be careful: depending on the model, AI might also produce a similar response. What does this mean? It is essential to remember that AI models are based on information provided by human beings. In this situation, it could emulate human behavior, but it is important to clarify that it is not actually human. It would simply provide the response that is statistically most probable for a human to give to such a statement, yet once again devoid of any real emotion.</p>

<h2 class="wp-block-heading" id="h-perche-molte-persone-si-aprono-con-un-modello-ai">Why do many people “open up” to an AI model?</h2>

<p>There are numerous accounts of individuals who reveal that they share their thoughts and feelings with AI models more often than with real people. This situation is concerning, but it has a plausible explanation.</p>

<p>To make valid comparisons, AI should not be compared with extreme negative cases, but rather with rational human reference models. In a situation of need, relying on artificial intelligence might be safer than relying on a murderer (forgive the banal example), but fortunately murderers are few and the world is still populated by honest people.</p>

<p>When we think about young people who suffer from school bullying, it is easy to understand why they might feel comfortable opening up to AI, because they find a form of support that:</p>

<ul class="wp-block-list">
<li>Does not blame them</li>
<li>Does not bully them</li>
<li>Does not make fun of them</li>
<li>Gives them seemingly rational advice</li>
<li>Is instantly accessible 24/7</li>
</ul>

<p>Although effective at processing text, the system is not capable of understanding and integrating emotions, operating exclusively on textual data. In social interaction, gestures reveal a great deal about people’s emotions. AI, due to its limited understanding of nonverbal language, struggles to build real relationships. Tone also plays a fundamental role.</p>

<h2 class="wp-block-heading" id="h-il-lato-oscuro-delle-persone">The Dark Side of People</h2>

<p>Very often, when I am contacted by companies, the common question is this: “Engineer, where can I integrate Artificial Intelligence to save on personnel costs?” Translated: “Where can I implement AI so I can lay off some people and put more money in my own pocket?”</p>

<p class="p1">I cannot help but describe this entrepreneurial vision as <strong>decidedly short-sighted!</strong></p>

<p>The integration of AI aims at productivity and speed, not at laying off employees. With a solid financial situation and available liquidity, what benefit comes from using AI to reduce the workforce?</p>

<p>It would be like installing a more powerful engine in my car but removing the brakes, because my only goal is to accelerate faster. Fine, but when a wall appears in front of you, what do you do? Do you ask it to move?</p>

<p>The integration of AI to reduce staff makes sense only in one specific case: when a company is going through a critical period, with declining work, growing debts, and an inability to meet its financial obligations. To avoid closure, staff reductions may be carried out in an attempt to contain costs and preserve some jobs by trying to replace part of the workforce with AI.</p>

<p>However, this is an exception—an emergency situation, not the norm! It is like saying: “Let’s save what we can by laying off 10 people before it is too late and 100 people lose their jobs.”</p>

<p>The problem is that the companies asking me these kinds of questions often do not suffer from a lack of liquidity, nor do they experience any difficulty in maintaining their current level of employment.</p>

<p>A complex ethical question therefore arises for the entrepreneur and for the company tasked with developing these tools.</p>

<p>A legal framework on this matter would be essential: it would give employees greater peace of mind when facing the introduction of artificial intelligence as an ally in the workplace, ensure continuity in employment levels, and avoid the classic ethical conflicts that arise between client companies and software houses. Because, dear readers, Einstein once said that “it is easier to split an atom than a prejudice.”</p>

<p>Personally, every time I face proposals of this kind, I do my utmost to convince the person in front of me of the short-sightedness of such a vision: “use this personnel in more strategic areas, perhaps in marketing to increase demand.” But let’s be honest: many entrepreneurs focus solely on short-term profit, ignoring the ethical implications of their decisions and overlooking the long-term consequences.</p>

<p class="p1">On the other hand, an effort is also required from employees. As the saying goes, we must strike a balance&#8230; A permanent contract should not be seen as a university degree: “I’ve graduated, now I’m set for life, no one can take this piece of paper away from me.” Many employees, to use a euphemism, tend to “relax” once they are converted to a permanent contract. Some accumulate sick leave after sick leave even while in perfect health. Others arrive at the office and simply warm the chair, doing the bare minimum.</p>
<p class="p1">Of course, when I speak with entrepreneurs about maintaining employment levels, I stand up for diligent and virtuous workers, not for those who merely occupy a seat.</p>

<p>In this interview, we touch on some of these points. Enjoy watching.</p>

<iframe title="Intelligenza artificiale nelle aziende" width="500" height="375" src="https://www.youtube.com/embed/g3kUBG8UGpY?feature=oembed" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>

<p>The article <a href="https://renor.it/en/blog/my-interview-on-industry-experts-ai-and-ethics/">My Interview on “Industry Experts”: AI and Ethics</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Will AI make people lose jobs?</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/will-ai-make-people-lose-jobs/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Tue, 17 Jun 2025 19:52:02 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[adaptation]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI in the workplace]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[digital skills]]></category>
		<category><![CDATA[digital transformation]]></category>
		<category><![CDATA[digital transition]]></category>
		<category><![CDATA[evolution of work]]></category>
		<category><![CDATA[fear of change]]></category>
		<category><![CDATA[Future of work]]></category>
		<category><![CDATA[human creativity]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[meritocracy]]></category>
		<category><![CDATA[new professions]]></category>
		<category><![CDATA[professional change]]></category>
		<category><![CDATA[reskilling]]></category>
		<category><![CDATA[robots and work]]></category>
		<category><![CDATA[technological revolution]]></category>
		<category><![CDATA[transformation of trades]]></category>
		<category><![CDATA[upskilling]]></category>
		<category><![CDATA[work and AI]]></category>
		<category><![CDATA[work and technology]]></category>
		<guid isPermaLink="false">https://renor.it/will-ai-make-people-lose-jobs/</guid>

					<description><![CDATA[<p>Every technological revolution has changed the rules of the game. But those who have been able to adapt have always found a new role to play. Collective anxiety fueled by the media Catastrophic headlines and sensationalism: “AI will replace us all” In the public imagination, Artificial Intelligence is often described as a dark force poised [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/will-ai-make-people-lose-jobs/">Will AI make people lose jobs?</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Every technological revolution has changed the rules of the game. But those who have been able to adapt have always found a new role to play.   </p>

<h2 class="wp-block-heading">Collective anxiety fueled by the media</h2>

<h3 class="wp-block-heading">Catastrophic headlines and sensationalism: “AI will replace us all”</h3>

<p>In the public imagination, Artificial Intelligence is often described as a dark force poised to wipe out millions of jobs. This perception is fueled by alarmist headlines that leverage panic and uncertainty rather than data reality. “AI will replace 80 percent of jobs,” “Goodbye employees: AI will make them all useless,” “Will we work only 3 days a week or be unemployed for life?”  </p>

<p>These are some of the headlines populating newspapers, social media and blogs, generating a widespread sense of social anxiety.  </p>

<p>The problem is not the emphasis with which they are presented, but the total and utter <strong>analytical vacuum</strong> that accompanies them. These articles never distinguish between sectors, between roles that can be automated and those that cannot, between activities that will be assisted and those that will actually be eliminated. The result of this media catastrophism is a message of fear: either you adapt or you disappear. In reality, the truth is much more nuanced and less radical.</p>

<p>This propagandistic approach used only to generate clicks reminds me of other historical times when a new technology was demonized out of ignorance or defense of the status quo. This was seen with computers, with the Internet, with social media, with industrial automation. AI is no exception: it is interpreted not as a potential tool, but as a hostile, autonomous agent acting to “steal” something from the human being.    </p>

<p>In such a context, it is crucial to recover a lucid and informed vision, capable of distinguishing between assumptions, fears and hard data. Only with knowledge is it possible to overcome collective hysteria and face impending change with strategic intelligence rather than irrational and unfounded panic.   </p>

<h3 class="wp-block-heading">Confusion between automation and extinction</h3>

<p>In my opinion, one of the most common distortions in the various debates on Artificial Intelligence concerns the confusion between the automation of activities and the extinction of professions. This is a gross conceptual error that amplifies unfounded fears and paralyzes critical and strategic thinking.   </p>

<p>The reality is much more nuanced: AI does not replace trades but <strong>transforms</strong> tasks within trades. I realize that this sentence may not be clear to everyone, so let&#8217;s give an example.<br />Take the job of the journalist: AI can automate the writing of standardized articles (such as sports reports or financial bulletins), but it cannot and, in all likelihood, never will replace editorial sensibility, critical inquiry, the ability to construct an investigation, ask relevant questions in an interview, or interpret a cultural context.</p>

<p>This applies not only to publishing, but to a multitude of other fields. AI can <strong>assist</strong>, <strong>optimize</strong>, and <strong>speed up</strong> many of the phases of work, but this does not imply that the entire profession is undone. On the contrary, data show that in many cases, the presence of AI <strong>creates new functions</strong>, new responsibilities, and new hybrid roles.    </p>

<p>To confuse automation with the erasure of a profession is like saying that the invention of the washing machine <strong>erased</strong> the trade of those who washed clothes by hand. Today we can say that this is not the case; it has freed up time and resources, allowing people to devote themselves to something else: to education, creativity, study or more value-added activities. <br />The real crux then is not loss, but <strong>transformation</strong>: a process that requires adaptation, continuous training, and open-mindedness. AI will only take meaning away from those roles that <strong>refuse to evolve</strong>, not those that accept the challenge of change by using AI to their advantage.    </p>

<h2 class="wp-block-heading">AI scares those who shouldn&#8217;t be there: the end of undeserved roles?</h2>

<p>A little-discussed aspect of the public debate is that AI really scares only two categories of workers: those who are too narrowly specialized, and above all <strong>those who are weak on merit</strong> and protected by non-transparent dynamics. That is, specialists incapable of retraining outside their niche because they have never developed a mind able to juggle cross-cutting contexts, and above all people who occupy roles for which they have no real skills, having landed there through recommendation or kinship. Yes, these people especially would do well to start worrying!</p>

<p>In Italy, a country still strongly anchored to logics of titles, seniority and positioning rather than real value, AI is becoming an uncomfortable mirror. Not because it demeans humans, but because it exposes the functional uselessness of many roles. If 30 seconds with ChatGPT is all it takes to produce a document that would take some offices 3 days and 6 signatures, the question arises: was that position really needed?</p>

<p>In this light, the fear of AI is not fear of change, but fear of transparency. For the first time, a technology is able to measure (very often in real time) the value produced versus time spent, operational redundancy and actual contribution.</p>

<p>Perhaps paradoxically, the advent of artificial intelligence represents an opportunity to initiate, even indirectly, a process of <strong>natural meritocracy</strong>. Not the one imposed by decree, but the one that emerges when the system stops tolerating the useless because there is an objectively more efficient alternative.   </p>

<p>Therefore, the fear is justified only in one sense: <strong>if your role exists only because you are unable to do anything else and cannot reinvent yourself, you do not generate added value and objectively represent a burden, because you do not devote yourself to your work with passion and dedication but only to make ends meet and collect your salary.</strong></p>

<p>Those who are creative cannot be afraid of something that by definition is not creative. AI does not have insights; it merely performs tasks as it has been trained to do them. </p>

<h2 class="wp-block-heading">The productivity paradox: we fear what should lift us up</h2>

<p>One of the most glaring contradictions in the Artificial Intelligence debate is the fact that many workers seem to fear the very thing that, in theory, should relieve them.  </p>

<p>Technological innovations have always aimed to optimize time, reduce errors, and automate processes. Yet in the face of AI this logic is reversed. If efficiency was once desirable, it now becomes a threat. But why?   </p>

<p>Because in many organizations, both public and private, real productivity has never been a variable in the equation. People work or pretend to work to fill schedules, defend roles, maintain balance between weak skills and repetitive tasks. In these contexts, the arrival of a system that can do in 5 minutes what it takes a team 3 days to deliver is perceived not as liberation, but as existential danger.    </p>

<p>The real problem, then, is not AI itself; it is that AI challenges the value of activities whose value was already questionable. Automation makes visible the absurdity of entire business processes built on slowness, unnecessary intermediation, and repetition for its own sake.   </p>

<p>But there is also an even deeper issue: productivity creates empty space, and emptiness, culturally, is scary. Those who work in companies that do not reward individual initiative, creativity and strategic thinking wonder: “If AI frees up 3 hours a day for me&#8230; what will I do with that time? Will I be valued for what I can create, or for what I no longer have to do?”</p>

<p>In this ambiguity lies the real paradox: <strong>the technology that could finally allow humans to focus on what matters is experienced as an attack on survival.</strong> But perhaps the problem is not the technology, but the cultural model that has accustomed us to working to exist rather than to produce value.</p>

<h2 class="wp-block-heading">Similar historical cases</h2>

<h3 class="wp-block-heading">When ecommerce came along, it was said, “It&#8217;s the end of stores.”</h3>

<p>The introduction of ecommerce generated, at the time, a wave of panic much like the one that accompanies Artificial Intelligence today. There were fears that physical stores would close en masse, that real shopping would become obsolete, and that entire industries, from retail to logistics to the real estate market of store rentals, would collapse.   </p>

<p>Yet, in hindsight, we know today that this was not the case. Ecommerce did not destroy traditional commerce: <strong>it forced it to evolve</strong>. Many small stores started selling online, malls integrated omnichannel strategies, big brands invested in hybrid platforms. Click&amp;collect, live commerce and the digital drive-in were born. The shopping experience is not dead; it has transformed into a “phygital” experience, i.e., a physical and digital experience together.</p>

<p>In fact, what happened is exactly what is happening with AI today: those who resisted were left behind; those who adapted their model thrived. The retailer who saw ecommerce as a threat closed. The one that saw it as an opportunity, an extension of its service, gained new market share. This evidence teaches us that each technology does not erase the existing but reshapes the competitive environment. It is not technology that kills a business, but the inability to adapt to the new paradigm.      </p>

<h3 class="wp-block-heading">Industrialization and the end of artisans?</h3>

<p>In the late 19th century, when the first steam engines and mechanical looms began to replace the manual labor of artisans, many cried scandal: “This is the death of the dignity of labor.” And there was no shortage of concrete reasons: those who had spent a lifetime working wood, iron and fabric with skill and dedication suddenly saw their craft reduced to a repetitive, impersonal, automated process.</p>

<p>But again, the apocalyptic prophecy proved inaccurate. Industrialization did not kill human labor, it multiplied it. <br />New figures, new specializations, new professional hierarchies were born. Handicraft work did not disappear: it changed roles, scaled down but did not lose value. Even today we can buy a jacket from a famous brand and pay 300 euros. Getting it made to measure by a craftsman, choosing the fabric, the buttons, the style, the workmanship can cost 3,000 euros!   <br />Thus, there is still a context today in which uniqueness and quality are much more important parameters than quantity.  </p>

<p>That industrial transition was one of the most powerful levers of economic and social growth in history. It enabled the emergence of the working classes, urbanization, union rights, the very concept of regulated working hours. Without that transition, there would not have been modern protections, mass access to consumption, or the ability to work outside an agricultural or feudal context.    </p>

<p>Those who knew how to move from fear to organization, from the workshop to the factory, not only maintained their professional dignity, but helped build a new model of civilization, one that has allowed us to get where we are today, with discoveries in science and technology that have greatly simplified our existence and extended our lives.</p>

<p>Today, AI is viewed with the same suspicion: cold, impersonal, dehumanizing; but again, it is not the tool itself that brings about change, but the system&#8217;s ability to absorb it, modulate it, and channel it into new social and professional meanings.  </p>

<h3 class="wp-block-heading">Computerization and office work</h3>

<p>In the late 1980s and early 1990s, the massive introduction of personal computers into offices was experienced, once again, as a threat. It was said that computers would eliminate the need for administrative staff, that paper documents would disappear, and that human beings would become mere software accessories. </p>

<p>There is some truth in that. Many repetitive and manual functions were replaced by spreadsheets, databases and document management systems. But what was lost in mechanical activities was compensated for by the emergence of new cognitive and digital responsibilities. Computerization gave rise to jobs that did not exist before: data entry specialists, systems engineers, project managers, IT managers, functional analysts, developers, software engineers, and so on.</p>

<p>The secretary became an office manager, the bookkeeper learned to use accounting software, the archivist evolved with digital filing. The job remained, but it changed its skin.</p>

<p>This shows that whenever a technology enters a business, it does not destroy the entire ecosystem, but recombines existing businesses. Some contract, some expand, and some are born from scratch.   </p>

<p>Once again, AI today presents itself with an impact similar to that of computerization: powerful, cross-cutting, barely visible to the naked eye but capable of profoundly redefining processes. And as then, the outcome will depend on only one thing: the willingness of each professional to upgrade.   </p>

<h2 class="wp-block-heading">The wrong predictions of the recent past</h2>

<p>The history of innovation is littered with dire predictions that never actually came true; then again, a catastrophist headline sells much better. As we have seen in the previous paragraphs, many technological innovations have brought a fear of change, an unfounded fear, since the change has always proved positive. A technology is born when one feels the need for it: I start thinking about a hammer only when I need to drive a nail into the wall, not before.</p>

<p>Prediction errors do not arise from incompetence, but from a recurring methodological error: we regard technology as an active agent, and human beings as passive. In reality, while technology evolves, humans react, adapt, reinvent themselves; and that is precisely where the catastrophic prophecies fall apart. </p>

<h2 class="wp-block-heading">The transformation of existing roles</h2>

<p>One of the most important but least understood pieces of evidence is that most of the changes brought about by AI do not involve the disappearance of roles, but rather their internal transformation. Job titles remain, tasks are rewritten. Priorities, modes of operation, tools and required skills change.    </p>

<p>An advertising graphic designer today can no longer just lay out static elements: they have to know how to write visual prompts to generate drafts with DALL-E or Midjourney, how to edit texts from an SEO perspective, and in some cases they interact with AI tools that optimize campaigns. The strategic part, however (<strong>message, positioning and tone of voice</strong>), remains entirely in human hands, because the human is the one with the insight, and this is where value is generated.</p>

<p>An architect once focused solely on CAD software and building constraints can now use generative AI to create design variations, explore new materials, and test solutions in virtual environments. They are not replaced, <strong>they are empowered!</strong></p>

<p>A copywriter no longer writes everything from scratch: they <strong>orchestrate the text</strong>, combining human intuition with AI-driven text development. Their role shifts from production to semantic curation and supervision; and meanwhile a new figure is born in this context, that of the <strong>prompt writer</strong>, i.e., someone who knows how to ask AIs in the right way to get the expected results.</p>

<p>Even in the seemingly more exposed areas such as customer care or technical support, AI handles the basic levels, but the human rises to the higher levels where empathy, adaptation and negotiation are needed. Professionals who once handled repetitive tickets now become chatbot trainers, quality supervisors, support experience designers, and solve customer care problems where AI lacks the tone and empathy to be able to solve them.   </p>

<figure class="wp-block-pullquote"><blockquote><p>AI doesn&#8217;t take your job, it takes the most repetitive part of your job and asks you to become something more.  </p></blockquote></figure>

<p>In this scenario, those who upgrade, those who are willing to look at AI as a solid ally can level up, not be pushed out. The only real risk is to remain identical to oneself while the world is changing.   </p>

<h2 class="wp-block-heading">Working with AI, not against: the “augmented” professional</h2>

<p>The most common instinctive reaction toward AI is defensive: “I must protect myself,” “I must prevent it from stealing my job.” But this is a static, losing mindset. The question is not whether AI will replace you. The question is whether you will be able to use it to your advantage to become better.     </p>

<p>Generative, conversational, predictive AIs are not enemy entities; they are <strong>tools</strong>. Exactly as electricity, the computer, e-commerce, the combustion engine, and the cloud once were. And like any powerful tool, <strong>their value depends on who wields them.</strong></p>

<p>Today, a lawyer who knows how to use ChatGPT or Claude to draft a first draft, extract case law references, or simulate a line of argument is no less competent: he or she is faster, more scalable, and more competitive.  </p>

<p>A designer who knows how to use DALL-E to generate a dozen visual proposals in 30 seconds is no less creative; he or she is freer to choose, iterate, and dare.  </p>

<p>A recruiter who uses AI models to extract recurring patterns in resumes and verify them with their own critical eye is not outdated: <strong>they are empowered</strong>.</p>

<p>AI is not a force that pushes you away. It is a force that comes alongside you, if you let it become part of your working toolkit. It is just like what happens in a team: collective intelligence grows when everyone knows their value. The human remains irreplaceable where humans are needed most: intuition, empathy, strategic vision, moral responsibility. The future of work is not human or artificial; it is human and artificial.</p>

<h2 class="wp-block-heading">The winning mindset: adaptive, not defensive</h2>

<p>There is a common trait shared by all the people who have come through major historical changes unscathed: <strong>an adaptive mindset</strong>. Not necessarily brilliant, not always technically gifted, but able to read the signs of change, accept them, and act accordingly. Fighting against something inevitable is not only futile, it is a waste of valuable time. These people did not resist: they studied, observed, experimented. They understood that the real danger is not change itself, but immobility.</p>

<p>Today more than ever, this principle comes back to center stage. In a world evolving at the speed of light, value is no longer in the position you occupy but in the speed with which you can move, relocate, and reinvent yourself. AI does not reward those with desks, but those with vision, and it silently and inexorably punishes those who are entrenched in nostalgia for “the way things used to work.”    </p>

<p>Many people will stand still because they are under the illusion that change is optional. It isn&#8217;t! Adapting is the only possible way forward. Resisting is not a strategy; it is only a slow sentence. Yet many prefer to deny, reject, and belittle what they do not know, in perfect Italian style.</p>

<p>A totally renewed mental posture is needed today more than ever. Not the “I defend myself” one but the “I prepare myself” one. Not that of “I am satisfied with what I know,” but that of “I want to understand how I can transform myself and what I can become for the better.”    </p>

<p>You don&#8217;t need to be an engineer, you need to be curious, open and quick to learn. It takes reading, trying, getting help from an AI tool, wondering how it might benefit your profession. Not everything works, but just trying puts you ahead of those who don&#8217;t even ask the question.    </p>

<p>After all, it is not the technology that decides who stays and who goes.<br />It is the mindset with which one approaches it. <br />Those with adaptive mindsets not only survive but <strong>flourish!</strong></p>

<h2 class="wp-block-heading">Work is not dying, it is evolving and asking us to do the same</h2>

<p>We have seen how each revolution has had its own monster to fear. Each time the worst was feared and each time the reality turned out to be different: not less work, but different work. Not less humanity, but humanity distributed, rewritten and sometimes forced out of the comfort zone.    </p>

<p>AI is no different; it is just faster, deeper, and more persuasive. It is not here to destroy work; it is here to shake up what was already not working: weak roles, unproductive repetitiveness, inefficient bureaucracy, unmotivated protections. It is scary because it lays <strong>bare the futility</strong>, and because <strong>it makes evident what until yesterday we could still pretend not to see.</strong></p>

<p>But for those with talent, for those willing to learn, for those ready for change, <strong>AI is not a threat, it is a lever.</strong> An opportunity to level up and shed the superfluous, to refocus on what really matters: intuition, planning, relationships, vision.</p>

<p>Work is not dying; it is changing shape. It is asking each of us, “What are you willing to become?”   </p>

<p>Those who can respond with clarity, courage and adaptive intelligence will not only not lose their place, they will build something new and superior.  </p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/will-ai-make-people-lose-jobs/">Will AI make people lose jobs?</a> originally appeared on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Evaluation of the level of Blur by Laplacian Variance</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/evaluation-of-the-level-of-blur-by-laplacian-variance/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Sun, 15 Jun 2025 09:49:09 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[blur detection]]></category>
		<category><![CDATA[blur metrics]]></category>
		<category><![CDATA[Canvas API]]></category>
		<category><![CDATA[celiac applications]]></category>
		<category><![CDATA[Client-side OCR]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[frontend side image validation]]></category>
		<category><![CDATA[image analysis in JavaScript]]></category>
		<category><![CDATA[image quality]]></category>
		<category><![CDATA[image sharpness]]></category>
		<category><![CDATA[Laplacian filter]]></category>
		<category><![CDATA[OCR optimization]]></category>
		<category><![CDATA[photo quality control]]></category>
		<category><![CDATA[photography app]]></category>
		<category><![CDATA[real-time image analysis]]></category>
		<category><![CDATA[sharpness measurement]]></category>
		<category><![CDATA[sharpness score]]></category>
		<category><![CDATA[variance of the Laplacian]]></category>
		<category><![CDATA[web image algorithm]]></category>
		<category><![CDATA[web-based image processing]]></category>
		<guid isPermaLink="false">https://renor.it/evaluation-of-the-level-of-blur-by-laplacian-variance/</guid>

					<description><![CDATA[<p>Our App for people with celiac disease and food intolerances, CeliachIA, allows users to instantly check for the presence of gluten in products that are not already catalogued in the database: the user photographs the ingredient list on the label and a fine-tuned computer vision model processes the image, returning the answer in seconds. During the development phase, [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/evaluation-of-the-level-of-blur-by-laplacian-variance/">Evaluation of the level of Blur by Laplacian Variance</a> originally appeared on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Our App for people with celiac disease and food intolerances, <strong>CeliachIA</strong>, allows users to instantly check for the presence of gluten in products that are not already catalogued in the database: the user photographs the ingredient list on the label and a <strong>fine-tuned computer vision</strong> model processes the image, returning the answer in seconds.</p>

<p>During the development phase, we ran into a crucial hurdle: ensuring that the photograph of the ingredients was sharp enough to allow accurate text extraction (OCR) and, consequently, reliable analysis. On the backend, <strong>Google Cloud Vision</strong> already provides an OCR confidence indicator; by applying appropriate thresholds we could decide whether to continue with the analysis or reject the image for poor quality. However, delegating this control to the cloud comes at an unnecessary cost; we still pay for the invocation of an external service even when the photo is obviously unusable.  </p>

<p>Therefore, to minimize costs, it was essential to move quality control <strong>to the client</strong>. We developed a JavaScript function, based on the <strong>Laplacian variance</strong>, that evaluates image sharpness in real time: the library can be run directly during camera preview, avoiding sending out-of-focus or blurry shots to the backend. In this way we achieve a double optimization: the user experience improves (the “blurry image” feedback is immediate) and cloud processing costs are significantly reduced.  </p>
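<p>As a minimal sketch of this client-side gate (the function names and the minimum-score threshold here are illustrative, not taken from our production code), the decision logic reduces to comparing the 1-10 sharpness score against a cutoff before any network call is made:</p>

```javascript
// Hypothetical client-side gate: decide whether a frame is sharp enough
// to be worth sending to the backend OCR service. `score` is the 1-10
// sharpness score produced by a Laplacian-variance check.
function shouldUploadImage(score, minScore = 6) {
  return score >= minScore;
}

// Immediate user feedback instead of a costly cloud round-trip:
function feedbackFor(score) {
  return shouldUploadImage(score) ? "ok" : "Image too blurry, please retake";
}
```

The cutoff would be tuned empirically against the OCR confidence returned by the backend, so that frames rejected locally are overwhelmingly frames the cloud would have rejected anyway.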

<h2 class="wp-block-heading">Measuring the sharpness of an image with the Laplacian variance</h2>

<p>The perceptual quality of a photograph is highly dependent on sharpness: an image out of focus or affected by motion blur looks unattractive and, in various application areas (computer vision, diagnostics, mobile photography), can compromise the entire automatic analysis process.  </p>

<p>There are numerous techniques for quantitatively estimating the level of blur. In this article, we present a fast approach, free of external dependencies and suitable for real time, based on the <strong>variance of the Laplacian operator</strong>. We will start from the theoretical fundamentals and arrive at a complete implementation in JavaScript, accompanied by a small Web interface for testing purposes.</p>

<p>The ultimate goal will be to build a widget that assigns a score from 1 to 10 to the sharpness of any image uploaded by the user.  </p>

<h2 class="wp-block-heading">Sharpness as a frequency issue</h2>

<p>Each digital image can be seen as the sum of components at different spatial frequencies:  </p>

<ul class="wp-block-list">
<li><strong>Low frequencies</strong> correspond to slow variations: uniform areas and smooth gradients;</li>



<li><strong>High frequencies</strong> correspond to abrupt transitions: edges and fine details.</li>
</ul>

<p>A blur acts as a <strong>low-pass filter</strong>, that is, it attenuates high frequencies. So the blurrier an image is, the less residual energy we will have in the high-frequency spectrum.   </p>

<p>The project will then work on translating this observation into a single number that is easy to interpret.  </p>

<h2 class="wp-block-heading">The variance of the Laplacian: from millions of pixels to a single number</h2>

<p>Once we apply the Laplacian filter to the image, we get a new matrix where each value represents the local change in light intensity. But how can we condense this information into a <strong>single numerical value</strong> that objectively describes how sharp the image is? </p>

<p>The simplest and most effective answer is: <strong>calculate the variance of these values</strong>.<br />As noted above, a well-focused image has many sharp edges and transitions, so the Laplacian values are widely distributed, both positive and negative: the variance is high.<br />A blurred image smooths the edges, so the Laplacian values stay close to zero and vary little: the variance is low.</p>

<p>The variance measures precisely how far the values deviate from the mean: the larger this deviation, the more detail and sharpness the image contains.  </p>
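<p>To make this link concrete, here is a small illustrative 1-D sketch (an analogue of the 2-D case: it uses the second-difference kernel [1, -2, 1] instead of the 2-D cross kernel applied later). A sharp step edge yields a far higher Laplacian variance than the same edge after smoothing:</p>

```javascript
// 1-D Laplacian (second difference) and its variance, skipping the borders:
// lap[k] = s[k] - 2*s[k+1] + s[k+2]
function laplacianVariance1D(signal) {
  const lap = signal.slice(1, -1).map((v, k) => signal[k] - 2 * v + signal[k + 2]);
  const mean = lap.reduce((a, b) => a + b, 0) / lap.length;
  return lap.reduce((a, v) => a + (v - mean) ** 2, 0) / lap.length;
}

// A sharp step edge vs. the same edge after blurring (a gradual ramp):
const sharp   = [0, 0, 0, 0, 255, 255, 255, 255];
const blurred = [0, 0, 32, 96, 159, 223, 255, 255];

// The sharp edge concentrates energy in the high frequencies, so its
// Laplacian response swings between large positive and negative values,
// while the blurred edge's response stays near zero.
```

Here `laplacianVariance1D(sharp)` evaluates to 21675 while `laplacianVariance1D(blurred)` gives roughly 683: the same edge, once smoothed, loses almost all of its Laplacian variance.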

<h2 class="wp-block-heading">Mathematical definition</h2>

<p>Given a Laplacian response matrix <img decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-f45bc4676758faced56a31d4afe2804e_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#76;&#95;&#105;&#36;" title="Rendered by QuickLaTeX.com" height="15" width="16" style="vertical-align: -3px;"/>, where <img decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-9f6be33c72982af4b393d661ed93d3e4_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#105;&#36;" title="Rendered by QuickLaTeX.com" height="12" width="5" style="vertical-align: 0px;"/> denotes each valid pixel (excluding the edge), we calculate:</p>

<p><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-1e65f0e6abc96cd6a7cbcf0b12139745_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#109;&#117;&#32;&#61;&#32;&#102;&#114;&#97;&#99;&#123;&#49;&#125;&#123;&#78;&#125;&#115;&#117;&#109;&#95;&#123;&#105;&#61;&#49;&#125;&#94;&#123;&#78;&#125;&#32;&#76;&#95;&#105;&#44;&#32;&#113;&#113;&#117;&#97;&#100;&#32;&#115;&#105;&#103;&#109;&#97;&#94;&#50;&#32;&#61;&#32;&#102;&#114;&#97;&#99;&#123;&#49;&#125;&#123;&#78;&#125;&#115;&#117;&#109;&#95;&#123;&#105;&#61;&#49;&#125;&#94;&#123;&#78;&#125;&#40;&#76;&#95;&#105;&#32;&#45;&#32;&#109;&#117;&#41;&#94;&#50;" title="Rendered by QuickLaTeX.com" height="20" width="523" style="vertical-align: -5px;"/></p>

<p>Where:</p>

<ul class="wp-block-list">
<li><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-82c54c2016cfb04537f3a6ddd7a692b0_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#109;&#117;&#36;" title="Rendered by QuickLaTeX.com" height="8" width="24" style="vertical-align: 0px;"/> is the average of the Laplacian values;</li>



<li><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-9675ccd83f47754d4044b5cb35fda688_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#78;&#36;" title="Rendered by QuickLaTeX.com" height="12" width="13" style="vertical-align: 0px;"/> is the total number of pixels involved (usually $(w - 2)(h - 2)$);</li>



<li><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-5b6996c2cdce5d94d6037c91de4b2f62_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#115;&#105;&#103;&#109;&#97;&#94;&#50;&#36;" title="Rendered by QuickLaTeX.com" height="19" width="52" style="vertical-align: -4px;"/> is the <strong>variance of the Laplacian</strong>, i.e., our sharpness indicator.</li>
</ul>

<p>Now we need to go from a raw value to an interpretable score.  </p>

<p>Values of <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-5b6996c2cdce5d94d6037c91de4b2f62_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#115;&#105;&#103;&#109;&#97;&#94;&#50;&#36;" title="Rendered by QuickLaTeX.com" height="19" width="52" style="vertical-align: -4px;"/> depend on hardware and environmental variables: resolution, noise, JPEG compression, lens quality. Therefore, it does not make sense to use a fixed threshold: we need to <strong>normalize</strong> the value against an empirical range. </p>

<p>We define:</p>

<ul class="wp-block-list">
<li><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-326c25ab934da5af2387a852942c9351_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#84;&#95;&#123;&#109;&#105;&#110;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="38" style="vertical-align: -3px;"/>: typical variance of a very blurry photo → corresponds to <strong>score 1</strong>;</li>



<li><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-98cd1cdde06d8db0f191ed6f2eb2313f_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#36;&#84;&#95;&#123;&#109;&#97;&#120;&#125;" title="Rendered by QuickLaTeX.com" height="15" width="40" style="vertical-align: -3px;"/>: typical variance of a perfectly sharp photo → corresponds to <strong>score 10</strong>.</li>
</ul>

<p>Therefore:</p>

<p><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-d81939d6821640c1b6ec66f97df8ae2b_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#116;&#101;&#120;&#116;&#123;&#115;&#99;&#111;&#114;&#101;&#125;&#32;&#61;&#32;&#49;&#32;&#43;&#32;&#57;&#32;&#116;&#105;&#109;&#101;&#115;&#32;&#111;&#112;&#101;&#114;&#97;&#116;&#111;&#114;&#110;&#97;&#109;&#101;&#123;&#99;&#108;&#97;&#109;&#112;&#125;&#108;&#101;&#102;&#116;&#40;&#102;&#114;&#97;&#99;&#123;&#115;&#105;&#103;&#109;&#97;&#94;&#50;&#32;&#45;&#32;&#84;&#95;&#123;&#109;&#105;&#110;&#125;&#125;&#123;&#84;&#95;&#123;&#109;&#97;&#120;&#125;&#32;&#45;&#32;&#84;&#95;&#123;&#109;&#105;&#110;&#125;&#125;&#44;&#32;&#48;&#44;&#32;&#49;&#114;&#105;&#103;&#104;&#116;&#41;" title="Rendered by QuickLaTeX.com" height="20" width="702" style="vertical-align: -5px;"/></p>

<p>After calculation, the value is rounded to the nearest integer to return a score between 1 and 10. This approach also makes the result readable for the end user, as well as useful for software-side control logic. </p>
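<p>This mapping can be sketched in a few lines (the default thresholds 9 and 50 are the same calibrated values used in the full implementation below):</p>

```javascript
// Map a raw Laplacian variance to a 1-10 sharpness score, clamping the
// normalized value to [0, 1] before scaling. T_min corresponds to a very
// blurry photo (score 1), T_max to a perfectly sharp one (score 10).
function sharpnessScore(variance, T_min = 9, T_max = 50) {
  const norm = Math.max(0, Math.min(1, (variance - T_min) / (T_max - T_min)));
  return Math.round(1 + 9 * norm);
}

// sharpnessScore(9)  → 1   (at or below T_min)
// sharpnessScore(50) → 10  (at or above T_max)
```

Thanks to the clamp, variances outside the empirical range still produce valid scores: anything below T_min maps to 1, anything above T_max maps to 10.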

<h2 class="wp-block-heading">Implementation in JavaScript</h2>

<p>The algorithm can be implemented in pure JavaScript using the Canvas API to access the pixels of an image uploaded or taken by camera. The following function takes as input an HTML element (<code>&lt;img&gt;</code>, <code>&lt;canvas&gt;</code>, <code>&lt;video&gt;</code>, or <code>ImageBitmap</code>) and returns an object with <code>score</code> (1-10) and <code>variance</code>.</p>

<div class="wp-block-kevinbatdorf-code-block-pro" data-code-block-pro-font-family="Code-Pro-JetBrains-Mono" style="font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2"><span style="display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E"><svg xmlns="http://www.w3.org/2000/svg" width="54" height="14" viewbox="0 0 54 14"><g fill="none"></g></svg></span><span role="button" style="color:#D4D4D4;display:none" aria-label="Copy" class="code-block-pro-copy-button"><textarea class="code-block-pro-copy-button-textarea" aria-hidden="true" readonly>/**
 * Evaluate image sharpness using Laplacian variance.
 * @param {HTMLImageElement|HTMLCanvasElement|HTMLVideoElement|ImageBitmap} source
 * @param {{thresholdMin?: number, thresholdMax?: number}=} opts
 * @return {Promise&lt;{score: number, variance: number}&gt;}
 */
export async function blurMeter(source, opts = {}) {
 const T_min = opts.thresholdMin ?? 9; // calibrated: blurry photo
 const T_max = opts.thresholdMax ?? 50; // calibrated: sharp photo  

  // 1. Prepare off-screen canvas
 const w = source.width || source.videoWidth || source.naturalWidth;
 const h = source.height || source.videoHeight || source.naturalHeight;
 if (!w || !h) throw new Error("Source has invalid dimensions"); 

  const off = typeof OffscreenCanvas === "function"
? new OffscreenCanvas(w, h)
: (() =&gt; { const c = document.createElement("canvas"); c.width = w; c.height = h; return c; })();
 const ctx = off.getContext("2d");
 ctx.drawImage(source, 0, 0, w, h);
 const { data } = ctx.getImageData(0, 0, w, h); 

  // 2. Convert to grayscale
 const gray = new Float32Array(w * h);
 for (let i = 0, j = 0; i &lt; data.length; i += 4, ++j) {
 gray[j] = 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
 } 

  // 3. Apply Laplacian (4-neighbour kernel)
 const lap = new Float32Array(w * h);
 const idx = (x, y) =&gt; y * w + x;
 for (let y = 1; y &lt; h - 1; ++y) {
 for (let x = 1; x &lt; w - 1; ++x) {
 const i = idx(x, y);
 lap[i] = 4 * gray[i] - gray[idx(x - 1, y)] - gray[idx(x + 1, y)] - gray[idx(x, y - 1)] - gray[idx(x, y + 1)];
 }
 } 

  // 4. Compute variance
 let sum = 0, sumSq = 0, count = (w - 2) * (h - 2);
 for (let y = 1; y &lt; h - 1; ++y) {
 for (let x = 1; x &lt; w - 1; ++x) {
 const v = lap[idx(x, y)];
 sum += v;
 sumSq += v * v;
 }
 }
 const mean = sum / count;
 const variance = (sumSq / count) - (mean * mean); 

  // 5. Map to score
 const norm = Math.max(0, Math.min(1, (variance - T_min) / (T_max - T_min)));
 const score = Math.round(1 + 9 * norm); 

  return { score, variance };
}</textarea><svg xmlns="http://www.w3.org/2000/svg" style="width:24px;height:24px" viewbox="0 0 24 24"><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4"></path><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2"></path></svg></span><pre class="shiki dark-plus" style="background-color: #1E1E1E"><code><span class="line"><span style="color: #6A9955">/**</span></span>
<span class="line"><span style="color: #6A9955"> * Evaluate image sharpness using Laplacian variance.</span></span>
<span class="line"><span style="color: #6A9955"> * </span><span style="color: #569CD6">@param</span><span style="color: #6A9955"> </span><span style="color: #4EC9B0">{HTMLImageElement|HTMLCanvasElement|HTMLVideoElement|ImageBitmap}</span><span style="color: #6A9955"> </span><span style="color: #9CDCFE">source</span></span>
<span class="line"><span style="color: #6A9955"> * </span><span style="color: #569CD6">@param</span><span style="color: #6A9955"> </span><span style="color: #4EC9B0">{{thresholdMin?: number, thresholdMax?: number}=}</span><span style="color: #6A9955"> </span><span style="color: #9CDCFE">opts</span></span>
<span class="line"><span style="color: #6A9955"> * </span><span style="color: #569CD6">@return</span><span style="color: #6A9955"> </span><span style="color: #4EC9B0">{Promise&lt;{score: number, variance: number}&gt;}</span></span>
<span class="line"><span style="color: #6A9955"> */</span></span>
<span class="line"><span style="color: #C586C0">export</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">async</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">blurMeter</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">opts</span><span style="color: #D4D4D4"> = {}) {</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">T_min</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">opts</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">thresholdMin</span><span style="color: #D4D4D4"> ?? </span><span style="color: #B5CEA8">9</span><span style="color: #D4D4D4">;    </span><span style="color: #6A9955">// calibrated: blurry photo</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">T_max</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">opts</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">thresholdMax</span><span style="color: #D4D4D4"> ?? </span><span style="color: #B5CEA8">50</span><span style="color: #D4D4D4">;   </span><span style="color: #6A9955">// calibrated: sharp photo</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #6A9955">// 1. Prepare off-screen canvas</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">w</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">width</span><span style="color: #D4D4D4">  || </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">videoWidth</span><span style="color: #D4D4D4">  || </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">naturalWidth</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">h</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">height</span><span style="color: #D4D4D4"> || </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">videoHeight</span><span style="color: #D4D4D4"> || </span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">naturalHeight</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #C586C0">if</span><span style="color: #D4D4D4"> (!</span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> || !</span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">) </span><span style="color: #C586C0">throw</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">Error</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">"Source has invalid dimensions"</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">off</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">typeof</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">OffscreenCanvas</span><span style="color: #D4D4D4"> === </span><span style="color: #CE9178">"function"</span></span>
<span class="line"><span style="color: #D4D4D4">    ? </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">OffscreenCanvas</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">)</span></span>
<span class="line"><span style="color: #D4D4D4">    : (() </span><span style="color: #569CD6">=&gt;</span><span style="color: #D4D4D4"> { </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">c</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">document</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">createElement</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">"canvas"</span><span style="color: #D4D4D4">); </span><span style="color: #9CDCFE">c</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">width</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">c</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">height</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">; </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">c</span><span style="color: #D4D4D4">; })();</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">ctx</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">off</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">getContext</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">"2d"</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #9CDCFE">ctx</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">drawImage</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">source</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> { </span><span style="color: #4FC1FF">data</span><span style="color: #D4D4D4"> } = </span><span style="color: #9CDCFE">ctx</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">getImageData</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #6A9955">// 2. Convert to grayscale</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">gray</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">Float32Array</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">j</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">data</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">length</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4"> += </span><span style="color: #B5CEA8">4</span><span style="color: #D4D4D4">, ++</span><span style="color: #9CDCFE">j</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">j</span><span style="color: #D4D4D4">] = </span><span style="color: #B5CEA8">0.299</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">data</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4">] + </span><span style="color: #B5CEA8">0.587</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">data</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4"> + </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">] + </span><span style="color: #B5CEA8">0.114</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">data</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4"> + </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">  }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #6A9955">// 3. Apply Laplacian (4-neighbour kernel)</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">lap</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">Float32Array</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4"> = (</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">) </span><span style="color: #569CD6">=&gt;</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> + </span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; ++</span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; ++</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">i</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">lap</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4">] = </span><span style="color: #B5CEA8">4</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">i</span><span style="color: #D4D4D4">] - </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">)] - </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> + </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">)] - </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">)] - </span><span style="color: #9CDCFE">gray</span><span style="color: #D4D4D4">[</span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> + </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">)];</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line"><span style="color: #D4D4D4">  }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #6A9955">// 4. Compute variance</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">sum</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">sumSq</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">count</span><span style="color: #D4D4D4"> = (</span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">) * (</span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">h</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; ++</span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">let</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">w</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; ++</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">v</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">lap</span><span style="color: #D4D4D4">[</span><span style="color: #DCDCAA">idx</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">x</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">y</span><span style="color: #D4D4D4">)];</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">sum</span><span style="color: #D4D4D4"> += </span><span style="color: #9CDCFE">v</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">sumSq</span><span style="color: #D4D4D4"> += </span><span style="color: #9CDCFE">v</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">v</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line"><span style="color: #D4D4D4">  }</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">mean</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">sum</span><span style="color: #D4D4D4"> / </span><span style="color: #9CDCFE">count</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">variance</span><span style="color: #D4D4D4"> = (</span><span style="color: #9CDCFE">sumSq</span><span style="color: #D4D4D4"> / </span><span style="color: #9CDCFE">count</span><span style="color: #D4D4D4">) - (</span><span style="color: #9CDCFE">mean</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">mean</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #6A9955">// 5. Map to score</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">norm</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">max</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">min</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, (</span><span style="color: #9CDCFE">variance</span><span style="color: #D4D4D4"> - </span><span style="color: #9CDCFE">T_min</span><span style="color: #D4D4D4">) / (</span><span style="color: #9CDCFE">T_max</span><span style="color: #D4D4D4"> - </span><span style="color: #9CDCFE">T_min</span><span style="color: #D4D4D4">)));</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #569CD6">const</span><span style="color: #D4D4D4"> </span><span style="color: #4FC1FF">score</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">round</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4"> + </span><span style="color: #B5CEA8">9</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">norm</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> { </span><span style="color: #9CDCFE">score</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">variance</span><span style="color: #D4D4D4"> };</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span></code></pre></div>

<p>This function is lightweight, self-contained, and runs entirely client-side, which makes it suitable for use cases such as a live-camera web application or batch image analysis.</p>
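<p>As a quick sanity check, the variance-to-score mapping (steps 4 and 5 above) can be exercised in isolation. The snippet below is a standalone sketch; the <code>T_min</code> and <code>T_max</code> calibration thresholds are illustrative values, not necessarily the ones you should ship:</p>

```javascript
// Standalone copy of the score mapping from the function above.
// T_min / T_max are illustrative calibration thresholds: variances at or
// below T_min score 1 (very blurry), at or above T_max score 10 (sharp).
const T_min = 50;
const T_max = 1500;

function varianceToScore(variance) {
  // Normalize into [0, 1], clamping outliers, then map onto 1..10.
  const norm = Math.max(0, Math.min(1, (variance - T_min) / (T_max - T_min)));
  return Math.round(1 + 9 * norm);
}

console.log(varianceToScore(10));   // 1  (clamped at the blurry end)
console.log(varianceToScore(775));  // 6
console.log(varianceToScore(9000)); // 10 (clamped at the sharp end)
```

<p>Because of the clamp, any variance outside the <code>[T_min, T_max]</code> band saturates at 1 or 10, so the two thresholds effectively define what "completely blurry" and "perfectly sharp" mean for your image domain.</p>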

<p>As a next step, we build a simple interface with HTML and Tailwind to use this function interactively.</p>

<h2 class="wp-block-heading">HTML + Tailwind interface</h2>

<p>We now consume the function from a simple user interface built with HTML and Tailwind that lets the user upload an image, preview it, and immediately receive a sharpness score, visualized as a colored bar ranging from red (blurry) to green (sharp).</p>

<div class="wp-block-kevinbatdorf-code-block-pro" data-code-block-pro-font-family="Code-Pro-JetBrains-Mono" style="font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2"><span style="display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E"><svg xmlns="http://www.w3.org/2000/svg" width="54" height="14" viewbox="0 0 54 14"><g fill="none"></g></svg></span><span role="button" style="color:#D4D4D4;display:none" aria-label="Copy" class="code-block-pro-copy-button"><textarea class="code-block-pro-copy-button-textarea" aria-hidden="true" readonly>&lt;!DOCTYPE html&gt;
&lt;html lang="en"&gt;
&lt;head&gt;
  &lt;meta charset="UTF-8"&gt;
  &lt;title&gt;Blur Meter Demo&lt;/title&gt;
  &lt;!-- Tailwind CSS via CDN --&gt;
  &lt;script src="https://cdn.tailwindcss.com"&gt;&lt;/script&gt;
  &lt;meta name="viewport" content="width=device-width, initial-scale=1"&gt;
&lt;/head&gt;
&lt;body class="min-h-screen bg-slate-100 flex items-center justify-center p-4"&gt;
  &lt;div class="w-full max-w-lg bg-white shadow-xl rounded-2xl p-6 space-y-6"&gt;
    &lt;h1 class="text-2xl font-semibold text-center"&gt;Image Blur Meter&lt;/h1&gt;

    &lt;!-- File input --&gt;
    &lt;label class="flex flex-col items-center justify-center w-full h-40 px-4 transition bg-white border-2 border-dashed rounded-lg cursor-pointer hover:border-indigo-500"&gt;
      &lt;span class="text-sm text-gray-500"&gt;Click here to upload an image&lt;/span&gt;
      &lt;input id="uploader" type="file" accept="image/*" class="hidden"&gt;
    &lt;/label&gt;

    &lt;!-- Image preview --&gt;
    &lt;div id="previewWrapper" class="hidden"&gt;
      &lt;img id="preview" alt="preview" class="mx-auto max-h-64 rounded-lg shadow-md"/&gt;
    &lt;/div&gt;

    &lt;!-- Results --&gt;
    &lt;div id="result" class="hidden space-y-3"&gt;
      &lt;p id="scoreText" class="text-center text-lg font-medium"&gt;&lt;/p&gt;

      &lt;!-- Progress bar --&gt;
      &lt;div class="w-full bg-gray-200 rounded-full h-4"&gt;
        &lt;div id="scoreBar" class="h-4 rounded-full transition-all duration-500" style="width:0%"&gt;&lt;/div&gt;
      &lt;/div&gt;

      &lt;p id="varianceText" class="text-center text-sm text-gray-500"&gt;&lt;/p&gt;
    &lt;/div&gt;
  &lt;/div&gt;

  &lt;script type="module"&gt;
    import { blurMeter } from './js/blur-meter.js';

    const uploader      = document.getElementById('uploader');
    const preview       = document.getElementById('preview');
    const previewWrap   = document.getElementById('previewWrapper');
    const scoreText     = document.getElementById('scoreText');
    const varianceText  = document.getElementById('varianceText');
    const scoreBar      = document.getElementById('scoreBar');
    const resultBlock   = document.getElementById('result');

    const scoreToColor = score =&gt; {
      const t = (score - 1) / 9;
      const r = Math.round(220 + (22  - 220) * t);
      const g = Math.round( 38 + (163 -  38) * t);
      const b = Math.round( 38 + ( 74 -  38) * t);
      return `rgb(${r},${g},${b})`;
    };

    uploader.addEventListener('change', async e =&gt; {
      const file = e.target.files[0];
      if (!file) return;

      const url = URL.createObjectURL(file);
      preview.src = url;
      previewWrap.classList.remove('hidden');

      await new Promise(res =&gt; preview.onload = res);

      const { score, variance } = await blurMeter(preview);

      scoreText.textContent = `Sharpness Score: ${score}/10`;
      varianceText.textContent = `Laplacian variance: ${variance.toFixed(2)}`;
      const percent = ((score - 1) / 9) * 100;
      scoreBar.style.width = percent + '%';
      scoreBar.style.backgroundColor = scoreToColor(score);
      resultBlock.classList.remove('hidden');

      URL.revokeObjectURL(url);
    });
  &lt;/script&gt;
&lt;/body&gt;
&lt;/html&gt;</textarea><svg xmlns="http://www.w3.org/2000/svg" style="width:24px;height:24px" viewbox="0 0 24 24"><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4"></path><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2"></path></svg></span><pre class="shiki dark-plus" style="background-color: #1E1E1E"><code><span class="line"><span style="color: #D4D4D4">&lt;!</span><span style="color: #4FC1FF">DOCTYPE</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">html</span><span style="color: #D4D4D4">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;</span><span style="color: #569CD6">html</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">lang</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"it"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;</span><span style="color: #569CD6">head</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">meta</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">charset</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"UTF-8"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">title</span><span style="color: #808080">&gt;</span><span style="color: #D4D4D4">Blur Meter Demo</span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">title</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  &lt;!-- Tailwind CSS via CDN --&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">script</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">src</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"https://cdn.tailwindcss.com"</span><span style="color: #808080">&gt;&lt;/</span><span style="color: #569CD6">script</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">meta</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">name</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"viewport"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">content</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"width=device-width, initial-scale=1"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;/</span><span style="color: #569CD6">head</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;</span><span style="color: #569CD6">body</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"min-h-screen bg-slate-100 flex items-center justify-center p-4"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">div</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"w-full max-w-lg bg-white shadow-xl rounded-2xl p-6 space-y-6"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">h1</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"text-2xl font-semibold text-center"</span><span style="color: #808080">&gt;</span><span style="color: #D4D4D4">Image Blur Meter</span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">h1</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    &lt;!-- File input --&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">label</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"flex flex-col items-center justify-center w-full h-40 px-4 transition bg-white border-2 border-dashed rounded-lg cursor-pointer hover:border-indigo-500"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">span</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"text-sm text-gray-500"</span><span style="color: #808080">&gt;</span><span style="color: #D4D4D4">Click here to upload an image</span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">span</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">input</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"uploader"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">type</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"file"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">accept</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"image/*"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"hidden"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">label</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    &lt;!-- Image preview --&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">div</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"previewWrapper"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"hidden"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">img</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"preview"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">alt</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"preview"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"mx-auto max-h-64 rounded-lg shadow-md"</span><span style="color: #808080">/&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">div</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    &lt;!-- Results --&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">div</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"result"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"hidden space-y-3"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">p</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"scoreText"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"text-center text-lg font-medium"</span><span style="color: #808080">&gt;&lt;/</span><span style="color: #569CD6">p</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      &lt;!-- Progress bar --&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">div</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"w-full bg-gray-200 rounded-full h-4"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">div</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"scoreBar"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"h-4 rounded-full transition-all duration-500"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">style</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"width:0%"</span><span style="color: #808080">&gt;&lt;/</span><span style="color: #569CD6">div</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">div</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">p</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">id</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"varianceText"</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">class</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"text-center text-sm text-gray-500"</span><span style="color: #808080">&gt;&lt;/</span><span style="color: #569CD6">p</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">div</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">div</span><span style="color: #808080">&gt;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;</span><span style="color: #569CD6">script</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">type</span><span style="color: #D4D4D4">=</span><span style="color: #CE9178">"module"</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #D4D4D4">    import </span><span style="color: #569CD6">{</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">blurMeter</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">}</span><span style="color: #D4D4D4"> from './js/blur-meter.js';</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    const uploader      = document.getElementById('uploader');</span></span>
<span class="line"><span style="color: #D4D4D4">    const preview       = document.getElementById('preview');</span></span>
<span class="line"><span style="color: #D4D4D4">    const previewWrap   = document.getElementById('previewWrapper');</span></span>
<span class="line"><span style="color: #D4D4D4">    const scoreText     = document.getElementById('scoreText');</span></span>
<span class="line"><span style="color: #D4D4D4">    const varianceText  = document.getElementById('varianceText');</span></span>
<span class="line"><span style="color: #D4D4D4">    const scoreBar      = document.getElementById('scoreBar');</span></span>
<span class="line"><span style="color: #D4D4D4">    const resultBlock   = document.getElementById('result');</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    const scoreToColor = score =&gt; </span><span style="color: #569CD6">{</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">t</span><span style="color: #D4D4D4"> = (</span><span style="color: #9CDCFE">score</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">) / </span><span style="color: #B5CEA8">9</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">r</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">round</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">220</span><span style="color: #D4D4D4"> + (</span><span style="color: #B5CEA8">22</span><span style="color: #D4D4D4">  - </span><span style="color: #B5CEA8">220</span><span style="color: #D4D4D4">) * </span><span style="color: #9CDCFE">t</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">g</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">round</span><span style="color: #D4D4D4">( </span><span style="color: #B5CEA8">38</span><span style="color: #D4D4D4"> + (</span><span style="color: #B5CEA8">163</span><span style="color: #D4D4D4"> -  </span><span style="color: #B5CEA8">38</span><span style="color: #D4D4D4">) * </span><span style="color: #9CDCFE">t</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">b</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">Math</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">round</span><span style="color: #D4D4D4">( </span><span style="color: #B5CEA8">38</span><span style="color: #D4D4D4"> + ( </span><span style="color: #B5CEA8">74</span><span style="color: #D4D4D4"> -  </span><span style="color: #B5CEA8">38</span><span style="color: #D4D4D4">) * </span><span style="color: #9CDCFE">t</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">return</span><span style="color: #D4D4D4"> </span><span style="color: #CE9178">`rgb(</span><span style="color: #569CD6">${</span><span style="color: #9CDCFE">r</span><span style="color: #569CD6">}</span><span style="color: #CE9178">,</span><span style="color: #569CD6">${</span><span style="color: #9CDCFE">g</span><span style="color: #569CD6">}</span><span style="color: #CE9178">,</span><span style="color: #569CD6">${</span><span style="color: #9CDCFE">b</span><span style="color: #569CD6">}</span><span style="color: #CE9178">)`</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">}</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    uploader.addEventListener('change', async e =&gt; </span><span style="color: #569CD6">{</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">file</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">e</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">target</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">files</span><span style="color: #D4D4D4">[</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #DCDCAA">if</span><span style="color: #D4D4D4"> (!</span><span style="color: #9CDCFE">file</span><span style="color: #D4D4D4">) </span><span style="color: #9CDCFE">return</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">url</span><span style="color: #D4D4D4"> = </span><span style="color: #4FC1FF">URL</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">createObjectURL</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">file</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">preview</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">src</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">url</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">previewWrap</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">classList</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">remove</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">'hidden'</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #C586C0">await</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">Promise</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">res</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">=&gt;</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">preview</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">onload</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">res</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> { </span><span style="color: #9CDCFE">score</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">variance</span><span style="color: #D4D4D4"> } = </span><span style="color: #C586C0">await</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">blurMeter</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">preview</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">scoreText</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">textContent</span><span style="color: #D4D4D4">    = </span><span style="color: #CE9178">`Sharpness Score: </span><span style="color: #569CD6">${</span><span style="color: #9CDCFE">score</span><span style="color: #569CD6">}</span><span style="color: #CE9178">/10`</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">varianceText</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">textContent</span><span style="color: #D4D4D4"> = </span><span style="color: #CE9178">`Laplacian variance: </span><span style="color: #569CD6">${</span><span style="color: #9CDCFE">variance</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">toFixed</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">)</span><span style="color: #569CD6">}</span><span style="color: #CE9178">`</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">const</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">percent</span><span style="color: #D4D4D4">            = ((</span><span style="color: #9CDCFE">score</span><span style="color: #D4D4D4"> - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">) / </span><span style="color: #B5CEA8">9</span><span style="color: #D4D4D4">) * </span><span style="color: #B5CEA8">100</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">scoreBar</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">style</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">width</span><span style="color: #D4D4D4">     = </span><span style="color: #9CDCFE">percent</span><span style="color: #D4D4D4"> + </span><span style="color: #CE9178">'%'</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">scoreBar</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">style</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">backgroundColor</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">scoreToColor</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">score</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">resultBlock</span><span style="color: #D4D4D4">.</span><span style="color: #9CDCFE">classList</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">remove</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">'hidden'</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">      </span><span style="color: #4FC1FF">URL</span><span style="color: #D4D4D4">.</span><span style="color: #DCDCAA">revokeObjectURL</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">url</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">}</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">  </span><span style="color: #808080">&lt;/</span><span style="color: #569CD6">script</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;/</span><span style="color: #569CD6">body</span><span style="color: #808080">&gt;</span></span>
<span class="line"><span style="color: #808080">&lt;/</span><span style="color: #569CD6">html</span><span style="color: #808080">&gt;</span></span></code></pre></div>

<h2 class="wp-block-heading">Functional testing</h2>

<p>We tested the function on three different types of image&#8230;</p>

<h3 class="wp-block-heading">Completely out of focus</h3>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="550" height="692" src="https://renor.it/wp-content/uploads/2025/06/image-2.webp" alt="" class="wp-image-858" srcset="https://renor.it/wp-content/uploads/2025/06/image-2.webp 550w, https://renor.it/wp-content/uploads/2025/06/image-2-238x300.webp 238w" sizes="auto, (max-width: 550px) 100vw, 550px" /></figure>

<h3 class="wp-block-heading">Moderately blurred</h3>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="551" height="690" src="https://renor.it/wp-content/uploads/2025/06/image-3.webp" alt="" class="wp-image-860" srcset="https://renor.it/wp-content/uploads/2025/06/image-3.webp 551w, https://renor.it/wp-content/uploads/2025/06/image-3-240x300.webp 240w" sizes="auto, (max-width: 551px) 100vw, 551px" /></figure>

<h3 class="wp-block-heading">Sharp</h3>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="559" height="692" src="https://renor.it/wp-content/uploads/2025/06/image-4.webp" alt="" class="wp-image-862" srcset="https://renor.it/wp-content/uploads/2025/06/image-4.webp 559w, https://renor.it/wp-content/uploads/2025/06/image-4-242x300.webp 242w" sizes="auto, (max-width: 559px) 100vw, 559px" /></figure>

<h2 class="wp-block-heading">Conclusion</h2>

<p>The Laplacian variance proves to be a simple but extremely powerful and effective tool for assessing the sharpness of an image in real-time, client-side contexts. Its strength lies in its computational speed, its absence of external dependencies, and its consistency with human visual perception.</p>
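<p>To distill the idea, here is a minimal, self-contained sketch of the metric on a grayscale image stored as a flat array. This is an illustrative reduction of the concept, not the exact <code>blurMeter</code> pipeline used above:</p>

```javascript
// Minimal sketch of the Laplacian-variance sharpness metric on a
// grayscale image stored as a flat, row-major array of 0-255 values.
// Illustration only: the real pipeline above also handles decoding,
// downscaling, and grayscale conversion via the Canvas API.
function laplacianVariance(gray, width, height) {
  const responses = [];
  // 3x3 Laplacian kernel: -4 at the center, +1 at N/S/E/W neighbors.
  for (let y = 1; y < height - 1; y++) {
    for (let x = 1; x < width - 1; x++) {
      const i = y * width + x;
      const lap =
        gray[i - width] + gray[i + width] +
        gray[i - 1] + gray[i + 1] -
        4 * gray[i];
      responses.push(lap);
    }
  }
  const mean = responses.reduce((a, v) => a + v, 0) / responses.length;
  // Variance of the Laplacian responses: near zero for blurry images,
  // large when the image contains sharp edges.
  return responses.reduce((a, v) => a + (v - mean) ** 2, 0) / responses.length;
}
```

<p>A perfectly flat image yields a variance of exactly 0, while an image containing a hard edge produces a large value; this is precisely why the metric tracks perceived sharpness.</p>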

<p>In our use case, applied to the CeliachIA App, this metric allowed us to move quality control upstream, directly onto the user&#8217;s device, avoiding unnecessary cloud-invocation costs when images are not suitable for OCR analysis.</p>

<p>The pipeline built with JavaScript and the Canvas API, coupled with a simple Web interface developed with Tailwind, is a concrete example of how mathematical concepts and engineering tools can come together in a robust, cost-effective, and user-friendly solution.</p>

<p>Ultimately, having an automated sharpness scoring system is not just a technical quirk, but an essential component of ensuring quality, efficiency, and sustainability in modern image analysis pipelines.  </p>

<p>You can download this tool from my GitHub account at the link: <a href="https://github.com/thesimon82/Laplacian-Blur-Detector/">https://github.com/thesimon82/Laplacian-Blur-Detector/</a></p>

<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/evaluation-of-the-level-of-blur-by-laplacian-variance/">Evaluation of the level of Blur by Laplacian Variance</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>“It takes AI!” &#8211; When an if() would suffice.</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/it-takes-ai-when-an-if-would-suffice/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Mon, 09 Jun 2025 18:14:25 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[AI costs]]></category>
		<category><![CDATA[AI washing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[backend development]]></category>
		<category><![CDATA[buzzword]]></category>
		<category><![CDATA[coding]]></category>
		<category><![CDATA[design sobriety]]></category>
		<category><![CDATA[deterministic algorithms]]></category>
		<category><![CDATA[efficient scheduling]]></category>
		<category><![CDATA[emerging technologies]]></category>
		<category><![CDATA[engineering common sense]]></category>
		<category><![CDATA[fashion technology]]></category>
		<category><![CDATA[GPT]]></category>
		<category><![CDATA[if statement]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[overengineering]]></category>
		<category><![CDATA[PDF automation]]></category>
		<category><![CDATA[regex]]></category>
		<category><![CDATA[software design]]></category>
		<category><![CDATA[software engineering]]></category>
		<category><![CDATA[software engineering ethics]]></category>
		<category><![CDATA[startup tech]]></category>
		<category><![CDATA[technical solutions]]></category>
		<category><![CDATA[Technology innovation]]></category>
		<guid isPermaLink="false">https://renor.it/it-takes-ai-when-an-if-would-suffice/</guid>

					<description><![CDATA[<p>The era of AI washing Artificial intelligence is, without a shadow of a doubt, one of the most important technological developments and “discoveries” of our time. I use the term “breakthroughs” in quotation marks because in fact, the theory behind AI dates all the way back to 1956, the year of the famous Dartmouth conference [&#8230;]</p>
<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/it-takes-ai-when-an-if-would-suffice/">&#8220;It takes AI!&#8221; &#8211; When an if() would suffice.</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">The era of AI washing</h2>

<p>Artificial intelligence is, without a shadow of a doubt, one of the most important technological developments and “breakthroughs” of our time. I put “breakthroughs” in quotation marks because the theory behind AI actually dates all the way back to 1956, the year of the famous Dartmouth conference that marked its official birth.<br />Yet only in recent years has AI burst into the consumer and business world, becoming a magic word good for every context: marketing, products, investor slides, company reports, and so on.</p>

<p>The point is, if yesterday the fashion was the iPhone, today the fashion is artificial intelligence. Everyone wants to insert it everywhere, even where there is no need for it. Sometimes it is not only useless: it is less efficient, more fragile, and more expensive to maintain than a classical finite-state algorithm. But that matters little: the important thing is to be able to say that AI is there.</p>

<p>To better explain what I mean, I want to start with a glaring and absolutely real example.  </p>

<p><strong>AI? No, a while loop is enough: the case of the intelligent PDF</strong></p>

<p>A short time ago, a staffing company we have as a client asked me to develop something to help them send out all the CUs (Certificazione Unica, the Italian annual tax certificate). With thousands of employees involved, you can imagine how long it would have taken the poor administrative staff to manually unpack a PDF containing every employee&#8217;s CU and send them out one by one.</p>

<p>Let us then imagine that we have a textual PDF containing thousands of Single Certifications, one per employee, laid out consecutively. Each employee&#8217;s CU may run 8, 10, or 12 pages, and we want to split the file into as many PDFs as there are employees.</p>

<p>What would a developer do?<br />First, he would open the file to be split, analyze it, and notice that the first page of each employee&#8217;s CU contains the tax code in the header. Good: there is a pattern!<br />What would he think next?</p>

<p><em>“Great! I&#8217;ll write a script with a while loop that iterates over each page of the document from the first to the last. Inside the loop, a regex looks for a tax code; I validate the match to make sure what it fished out really is a tax code, and if validation succeeds I mark that page as the start of a new employee CU. As I process the following pages, as soon as I find another tax code I save the whole previous block as a new PDF and send it as an email attachment, extracting the employee&#8217;s data from the DB using the tax code in my variable as the search key, and I continue like that to the end.”</em></p>

<p>An elegant solution. Linear. Readable. Reliable and, above all, very fast!<br />No AI. Just structured logic, deterministic programming, and pattern analysis.</p>
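<p>The monologue above can be sketched in a few lines. This is only an illustration: <code>pageTexts</code> is an assumed input (in a real pipeline the text of each page would come from a PDF-extraction library), and the tax-code regex covers only the basic codice fiscale format, ignoring &#8220;omocodia&#8221; substitutions:</p>

```javascript
// Sketch of the deterministic splitting logic described above.
// Assumption: pageTexts is an array with the extracted text of each
// PDF page; a real pipeline would obtain it from a PDF library.
// Basic shape of an Italian tax code (codice fiscale): 6 letters,
// 2 digits, 1 letter, 2 digits, 1 letter, 3 digits, 1 check letter.
const TAX_CODE = /\b[A-Z]{6}\d{2}[A-Z]\d{2}[A-Z]\d{3}[A-Z]\b/;

function splitByTaxCode(pageTexts) {
  const blocks = [];   // each entry: { taxCode, pages: [pageIndex, ...] }
  let current = null;
  pageTexts.forEach((text, i) => {
    const match = text.match(TAX_CODE);
    if (match) {
      // A tax code marks the first page of a new employee's CU:
      // open a new block (implicitly closing the previous one).
      current = { taxCode: match[0], pages: [] };
      blocks.push(current);
    }
    if (current) current.pages.push(i);
  });
  return blocks;
}
```

<p>Each returned block can then be saved as a separate PDF and mailed, using the tax code as the database lookup key, exactly as in the monologue.</p>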

<p>Yet they wanted AI: “But can&#8217;t we use AI to automatically identify where each document starts?”</p>

<p>Of course you can use AI, but using AI to figure out on which page a CU starts and on which it ends is like wheeling out a laser cannon to kill an ant. Moreover, AI has very long inference times compared to a deterministic algorithm, which processes huge masses of data in seconds.</p>

<h2 class="wp-block-heading">But what about the costs?</h2>

<p>Of course! There are also costs!<br />Processing a document with thousands of employee CUs and tens of thousands of pages is not only time-consuming; the costs can be anything but insignificant. Depending on the model, one could spend more than 500 euros to process tens of thousands of PDF pages.</p>

<h2 class="wp-block-heading">Determinism vs. AI: two approaches compared </h2>

<p>Once it is clear that in our PDF example the simplest and most effective solution is deterministic, it is worth reflecting more generally on when it makes sense to use a logical, predictable approach and when it may make sense to introduce an artificial intelligence layer.  </p>

<h3 class="wp-block-heading">The deterministic approach</h3>

<p><strong>Logical, transparent, traceable</strong></p>

<p>A deterministic algorithm is composed of explicit rules with predictable behavior. For every known input, the output is guaranteed and replicable. It does not learn; it does exactly what you tell it to do.</p>

<p>In the case of our PDF, the structure is clear because we have the fixed pattern of the tax code, variability is practically zero, and the target is logically definable. In such cases the deterministic approach has enormous advantages: it is highly computationally efficient and can therefore run successfully even on devices with minimal resources. It is reliable, easily testable, and can be debugged immediately. The whole flow is understandable even by a team with no Machine Learning experience. That is why going beyond this solution <strong>because we want AI</strong> is not only unwarranted but counterproductive.</p>

<h3 class="wp-block-heading">The AI-driven approach</h3>

<p><strong>Flexible but less predictable, expensive, and not always justifiable.</strong></p>

<p>When we talk about AI we often mean systems capable of learning from examples (machine learning), generalizing patterns even in imperfectly defined contexts, and handling ambiguity and noise in data. All of this is useful if and only if the data do not follow rigid patterns known a priori, the context is chaotic (e.g., OCR on smartphone photos of unpredictable quality), or explicit logic fails because the edge cases are infinite or otherwise unpredictable.</p>

<p>In our PDF example, there is no ambiguity, no noise: using a neural network to identify the tax code is wasteful, both technically and economically.  </p>

<p>All this, however, does not stop certain companies, driven more by fashion than by engineering, from coming up with solutions like &#8220;let&#8217;s train an NLP model to recognize headers,&#8221; &#8220;let&#8217;s use GPT to segment documents,&#8221; or &#8220;let&#8217;s use a supervised classifier to decide whether a page is the beginning of a document&#8221;&#8230; all to <strong>do what a preg_match()</strong> solves in one line of code.</p>

<p>Let us get our feet back on the ground and remember that AI, however impressive, is only a tool. There is no war, or at least for a developer there should be none, between the deterministic approach and AI. They are both tools&#8230; but engineering logic teaches us to choose <strong>the simplest, most effective, and most justified tool for the problem to be solved</strong>. AI makes sense where determinism cannot reach: computer vision on fuzzy images, speech recognition, predictions based on complex time series, deep semantic analysis. But where there is a rule, a predictable structure, a valid logic, AI is not only useless: it is noise, complication, and cost.</p>

<h2 class="wp-block-heading">Intelligence is not artificial, it is by design  </h2>

<p>Artificial intelligence is an extraordinary tool, but like all powerful tools, it must be wielded judiciously. It is not a magic wand to wave over every problem, nor is it a key to slip into pitches to impress investors.   </p>

<p>There is a fundamental difference between <strong>doing innovation</strong> and <strong>pretending to innovate</strong>. Today, too many solutions are conceived starting from the tool rather than the problem.<br />The result is over-engineered systems that are expensive, fragile, often unresponsive, and ultimately useless.</p>

<p>Forgive me, but if you go into a company to solve its problems and make it technologically efficient, what is the first thing you do? Look for somewhere to apply Artificial Intelligence? Or understand where the data comes from and how it moves, study how the company works, identify mechanical processes that can be automated, and then propose solutions based on the problems you actually found?</p>

<p>I would say with my eyes closed the second one!</p>

<p>In the real world, good engineering is the kind that solves the problem in the simplest, most elegant, and most sustainable way possible. Even if it doesn&#8217;t make the news. Even if it doesn&#8217;t say &#8220;AI-powered.&#8221;</p>

<p>The future of computing will not be dominated by those who stick neural models everywhere, but by those who know how to choose when they really need to. By those who have the courage to say, <em>“No, an if() is enough here.”</em> </p>

<p>Perhaps it is from this technical sobriety that a new idea of competence can be reborn: one that is measured not in buzzwords but in design clarity, efficiency, and accountability.  </p>
<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/it-takes-ai-when-an-if-would-suffice/">&#8220;It takes AI!&#8221; &#8211; When an if() would suffice.</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Sexy according to algorithm: the new aesthetic created by those who watch us</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/sexy-according-to-algorithm-the-new-aesthetic-created-by-those-who-watch-us/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Fri, 06 Jun 2025 13:07:40 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[aesthetic models]]></category>
		<category><![CDATA[aesthetics]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI image]]></category>
		<category><![CDATA[AI prompt]]></category>
		<category><![CDATA[AI-generated models]]></category>
		<category><![CDATA[algorithm]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[attraction]]></category>
		<category><![CDATA[beauty]]></category>
		<category><![CDATA[collective imagination]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[data]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[desire]]></category>
		<category><![CDATA[digital aesthetics]]></category>
		<category><![CDATA[digital photography]]></category>
		<category><![CDATA[femininity]]></category>
		<category><![CDATA[generative art]]></category>
		<category><![CDATA[Monica Bellucci]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[objectivity]]></category>
		<category><![CDATA[sex appeal]]></category>
		<category><![CDATA[sexy]]></category>
		<category><![CDATA[society and beauty]]></category>
		<category><![CDATA[subjectivity]]></category>
		<category><![CDATA[symmetry]]></category>
		<category><![CDATA[visual culture]]></category>
		<category><![CDATA[woman generated by AI]]></category>
		<guid isPermaLink="false">https://renor.it/sexy-according-to-algorithm-the-new-aesthetic-created-by-those-who-watch-us/</guid>

					<description><![CDATA[<p>This woman does not exist. Yet you can&#8217;t stop looking at her. This will be a non-technical article for those who are approaching or want to know more about artificial intelligence. Within a few years, AI has learned to generate images of women that are so realistic, seductive and perfectly composed that they can be [&#8230;]</p>
<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/sexy-according-to-algorithm-the-new-aesthetic-created-by-those-who-watch-us/">Sexy according to algorithm: the new aesthetic created by those who watch us</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">This woman does not exist. Yet you can&#8217;t stop looking at her.   </h2>

<p>This is a non-technical article for readers who are approaching artificial intelligence or want to learn more about it. Within a few years, AI has learned to generate images of women so realistic, seductive, and perfectly composed that they can be mistaken for real models. But how does a machine, devoid of emotions, desires, or sensory experience, <em>understand</em> what it means to “be sexy”, to the point of actually generating an image that evokes this status?</p>

<h2 class="wp-block-heading">The answer is in the data</h2>

<p>AI does not desire&#8230; it calculates. And to do so, it has seen millions of images: Renaissance portraits, Instagram selfies, early Playboy covers, fashion photographs. Every curve, every pose, every look has been broken down into vectors, metrics, weights, and statistical correlations. When you ask an AI to create a photo of a sexy woman, the algorithm doesn&#8217;t invent anything; it merely reassembles the global average of desire.</p>

<p>The picture we get is therefore an average of the traits (physical proportions, breast size, makeup, and so on) that fall within the average desire of people around the world.</p>

<p>This obviously raises a philosophical issue&#8230;</p>

<h2 class="wp-block-heading">Who decides what is sexy and what is not?</h2>

<p><em>“<strong>Beauty is subjective</strong>,”</em> how many times have you heard this phrase? It is repeated so often that it is now taken as self-evident, true no matter what.</p>

<p>I disagree! If beauty were truly subjective, then we would also have to accept the idea that a person universally recognized as beautiful could be called ugly. But no: there are limits, even to subjectivity.</p>

<p>Take Monica Bellucci at the height of her beauty.</p>

<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="674" height="1000" src="https://renor.it/wp-content/uploads/2025/06/image.webp" alt="Monica Bellucci" class="wp-image-725" style="aspect-ratio:16/9;object-fit:contain" srcset="https://renor.it/wp-content/uploads/2025/06/image.webp 674w, https://renor.it/wp-content/uploads/2025/06/image-202x300.webp 202w" sizes="auto, (max-width: 674px) 100vw, 674px" /></figure>

<p>If beauty were subjective, it would mean that someone in the world could sincerely believe Bellucci deserves the label “Ugly.”</p>

<p>I understand that there may be people who, out of personal preference or dislike, <strong>prefer</strong> other types of beauty: some may prefer a blonde, some may prefer smaller or even larger breasts. But to call Bellucci “Ugly” could only be a sign of envy (if said by a woman) or spite (if said by a man); in reality, both know that Monica Bellucci is anything but ugly.</p>

<p>From this reasoning I conclude that the subjectivity of aesthetics exists only within certain limits. We used the example of Bellucci, but we could have mentioned Angelina Jolie, Cindy Crawford, Naomi Campbell, or Charlize Theron, just as there are male aesthetic reference points such as Gabriel Garko, Paul Newman, Alain Delon, or Henry Cavill. True aesthetics is, in my opinion, something objective: the “beautiful” is beautiful by definition.</p>

<p>Then there is also a statistical argument: if 99% of the world&#8217;s population recognizes aesthetic value in a person, then that person can be said to possess objective <strong>aesthetic characteristics</strong>.</p>

<h2 class="wp-block-heading">Beauty according to AI</h2>

<p>Who, then, decides what is sexy when the one creating it is a machine that watches us? We do. We are the ones who educate the algorithm, and the AI merely holds up a mirror: it shows us the traits that most of the world statistically considers sexiest. An AI-generated woman is a collective portrait of our unconscious taste.</p>

<p>Beauty, they used to say, is in the eye of the beholder. But today those eyes are digital, and they learn fast!</p>

<p>“Sexy according to algorithm” is a journey into a new aesthetic: one made of prompts, neural networks, and desires we can no longer distinguish from reality.</p>

<p>How about you? Do you really think beauty is subjective? Or has artificial intelligence revealed something we don&#8217;t want to admit?  </p>

<p>Because this woman does not exist, but she still seduces us.</p>

<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/sexy-according-to-algorithm-the-new-aesthetic-created-by-those-who-watch-us/">Sexy according to algorithm: the new aesthetic created by those who watch us</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>The AI that knows everything about you</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/the-ai-that-knows-everything-about-you/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Fri, 30 May 2025 15:29:36 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[AI investigation]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Automated research]]></category>
		<category><![CDATA[Behavioral analysis]]></category>
		<category><![CDATA[Business scouting]]></category>
		<category><![CDATA[Candidate evaluation]]></category>
		<category><![CDATA[Career]]></category>
		<category><![CDATA[ChatGPT]]></category>
		<category><![CDATA[deep search]]></category>
		<category><![CDATA[Enneatype]]></category>
		<category><![CDATA[Future of work]]></category>
		<category><![CDATA[generative AI]]></category>
		<category><![CDATA[GPT-4]]></category>
		<category><![CDATA[HR tools]]></category>
		<category><![CDATA[MBTI]]></category>
		<category><![CDATA[online reputation]]></category>
		<category><![CDATA[openAI]]></category>
		<category><![CDATA[profiling AI]]></category>
		<category><![CDATA[prompt engineering]]></category>
		<category><![CDATA[Self-analysis]]></category>
		<category><![CDATA[Staff selection]]></category>
		<category><![CDATA[Tools for recruiters]]></category>
		<guid isPermaLink="false">https://renor.it/the-ai-that-knows-everything-about-you/</guid>

					<description><![CDATA[<p>Today we discover how artificial intelligence can, through a “Deep Search”, return in just a few minutes a complete, structured, and professional overview of anyone — including skills, career, online presence, strengths, and weaknesses — in an automated way and without having to spend hours on manual research. Introduction Until recently, obtaining a detailed profile [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/the-ai-that-knows-everything-about-you/">The AI that knows everything about you</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>Today we discover how artificial intelligence can, through a “Deep Search”, return in just a few minutes a complete, structured, and professional overview of anyone — including skills, career, online presence, strengths, and weaknesses — in an automated way and without having to spend hours on manual research. </p>

<h2 class="wp-block-heading">Introduction</h2>

<p>Until recently, obtaining a detailed profile of a person — whether a potential partner, candidate, investor, or competitor — required hours of research across LinkedIn, Google, articles, résumés, social networks, corporate databases, and more. Today, thanks to Generative Artificial Intelligence and in particular the “Deep Search” feature available in ChatGPT and other platforms, this process can be automated in a surprisingly effective way. </p>

<p>This article explains step by step how this technology works, what advantages it offers compared to traditional research methods, which precautions to take to obtain useful results, and how to integrate it into workflows that, as of this writing (May 2025), cannot yet be fully automated.</p>

<h2 class="wp-block-heading">What is Deep Search</h2>

<p>Deep Search is a set of prompt engineering and contextualization techniques that allows ChatGPT to generate a detailed and multidimensional analysis of a subject (person, company, or any other search entity), producing a structured response that includes, in the case of individuals for example, biographical, behavioral, psychological, professional, and communicative aspects. </p>

<p>For example, starting from a prompt like: </p>

<p><em>&#8220;Analyse Mario Rossi (mariorossi.com)&#8221;</em></p>

<p>From this alone, the model can return: a summary of the professional profile, strengths and weaknesses, the MBTI (Myers-Briggs Type Indicator) type, the psychological enneatype, decision-making style, a SWOT analysis, the predominant communication tone, a pitch to present to investors, and an estimate of perceived online reputation. </p>

<p>All of this is generated based on information available in the model’s training data, inferences drawn from common behavioral patterns, and internal coherence with the initial prompt. </p>

<p>It is evident that an analysis of this kind is only possible when there is online material available for the artificial intelligence to draw from. Because it relies exclusively on publicly available data already accessible on the web, this activity generally does not constitute a violation of the individual’s privacy, though it remains prudent to verify compliance with the data-protection rules applicable in your jurisdiction.  </p>

<h3 class="wp-block-heading">Difference Between a Simple Prompt and Contextual Deep Search</h3>

<p>In the world of generative artificial intelligence, not all prompts are created equal. A simple prompt is a direct request, often formulated in a single sentence, that produces an instant but generally superficial response. In contrast, <strong>contextual Deep Search</strong> represents a far more refined and powerful approach: it is based on a guided, layered conversation in which the user gradually builds context, steering the AI toward a deep and coherent analysis. It’s like the difference between asking, “Who is Mario Rossi?” and requesting, “Build a professional and psychological profile of Mario Rossi, CEO in the fintech sector, analyzing strengths, weaknesses, decision-making style, and communication impact.”<br/>The latter approach, which can be activated by enabling the appropriate switch, is what we refer to as Deep Search. It doesn’t just list facts — it interprets, connects, and deduces, offering a complex synthesis that blends biography, personality traits, and developmental potential. It’s the closest thing to a well-conducted imaginary interview, performed with method and intelligence.      </p>
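The contrast above can be sketched as message construction for a chat-style API. Everything in this sketch is illustrative: the function names, the system message, and the message schema are assumptions for demonstration, not ChatGPT&#8217;s actual Deep Search interface (which, as noted, is activated by a switch in the UI rather than by code).

```python
# Illustrative sketch: a flat prompt vs. a layered, contextual one.
# The dict-of-role-and-content message shape mirrors common chat APIs.

def simple_prompt(name: str) -> list[dict]:
    """A single, direct request: fast, but typically superficial."""
    return [{"role": "user", "content": f"Who is {name}?"}]

def deep_search_prompt(name: str, role: str, sector: str) -> list[dict]:
    """A guided, layered conversation that builds context turn by turn,
    steering the model toward a structured, multidimensional analysis."""
    return [
        {"role": "system",
         "content": "You are an analyst producing structured, sourced profiles."},
        {"role": "user",
         "content": f"Build a professional and psychological profile of {name}, "
                    f"{role} in the {sector} sector."},
        {"role": "user",
         "content": "Analyze strengths, weaknesses, decision-making style, and "
                    "communication impact. Flag inferences vs. sourced facts."},
    ]

messages = deep_search_prompt("Mario Rossi", "CEO", "fintech")
```

The point of the layered version is that each turn narrows and enriches the context the model works with, rather than leaving it to guess what kind of answer is wanted.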

<h3 class="wp-block-heading">The Concept of Inference from Partial Data</h3>

<p>One of the most fascinating aspects of next-generation artificial intelligence is its ability to infer complex information even from minimal or incomplete data. This process, known as <strong>inference from partial data</strong>, is based on the model’s capacity to recognize recurring patterns within vast sets of knowledge acquired during training. In other words, the AI doesn’t need to “know everything” to reconstruct a plausible picture: a few clues — a name, a role, a company — are enough to trigger a deductive mechanism capable of generating a coherent and structured representation of the person or context being examined. This is not imagination, but plausibility built upon statistical logic and probabilistic relationships between pieces of information.    </p>

<p>This makes artificial intelligence not just a consultative tool, but a cognitive synthesizer capable of simulating deep knowledge even when the starting point is highly fragmented. </p>

<h3 class="wp-block-heading">How ChatGPT Combines Known Sources, Patterns, and Deductive Logic</h3>

<p>Let’s begin by noting that, unlike traditional prompts which rely solely on information stored within the model, ChatGPT’s Deep Search function leverages an incredibly powerful additional component: real-time web browsing. When Deep Search is activated, the AI accesses the internet directly through Bing, searching for up-to-date and relevant sources in order to build a response based not only on statistical inference, but also on current data, official statements, social media profiles, news articles, and authoritative sources. This allows it to constantly update and refine the profile being analyzed, by combining known sources, recurring patterns, and deductive logic. The result is a deeply contextualized representation, merging the immediacy of generative intelligence with the accuracy of real-time web-based documentation, maintaining a balance between deduction and verification. It is this synergy between AI and intelligent web crawling that makes Deep Search an unprecedented tool for gathering strategic insights about people, companies, or phenomena in real time.     </p>

<h2 class="wp-block-heading">In What Contexts Can an AI-Generated Profile Be Useful?</h2>

<p>Artificial Intelligence has no emotions and simply provides objective analyses based on the information it finds online. Among the contexts in which it can prove useful are:  </p>

<h3 class="wp-block-heading">When we use it on a personal level</h3>

<p>Deep Search analysis can prove extremely valuable on a personal level. Asking artificial intelligence for a neutral and contextualized evaluation of your own profile allows you to observe yourself from an external, objective perspective, free from emotional bias. This form of assisted self-reflection can help bring to light strengths that are often underestimated, but more importantly, it can clearly identify areas for improvement, dysfunctional behaviors, or limiting patterns that may hinder personal or professional growth. It’s like holding up a mirror to yourself — but with the analytical objectivity of a tool that doesn’t judge: it observes, processes, and offers constructive insights.    </p>

<p>Of course, one must be prepared to receive criticism as well — feedback that may highlight underappreciated aspects or even vulnerabilities we tend to overlook. Ultimately, it’s important to understand that an analysis conducted by a system unaffected by emotional ties does not deliver a judgment, but rather offers something that can be useful for our personal growth. It requires clarity of mind, openness, and a healthy sense of self-criticism.  </p>

<h3 class="wp-block-heading">When Used in the Intermediate Phase of Job Interviews</h3>

<p>In the selection process for engineering roles, where recruiting is structured into multiple phases, the introduction of Deep Search can serve as a valuable tool during the intermediate stage. After the résumé screening and the initial interview, and before the technical assessment, it is possible to insert a phase of in-depth analysis powered by artificial intelligence. At this point, the recruiter can provide ChatGPT with the information gathered during the previous interview (such as responses given, observed behaviors, or preliminary impressions), enriching it with real-time web research. The model is then able to deliver a neutral, contextual, and well-reasoned evaluation of the candidate, highlighting the consistency between their online presence and personal communication, potential soft skills not explicitly stated, relational style, and cultural compatibility with the company. This approach does not replace human judgment, but rather complements it with an objective analytical lens, supporting more informed and well-rounded decision-making.     </p>

<h3 class="wp-block-heading">When We Want to Use It in Investigative, Legal, or Academic Applications</h3>

<p>In addition to its use in personal, HR, and commercial contexts, the Deep Search function also finds valuable applications in investigative, legal, and academic settings.</p>

<p>In the investigative field, it can be employed to build preliminary profiles of subjects of interest, reconstructing connections, past activities, digital traces, and publicly documented behaviors — all without accessing confidential archives or infringing on privacy.</p>

<p>In the legal domain, it can assist lawyers or technical consultants in the contextual gathering of public information on opposing parties, witnesses, experts, or companies involved in proceedings, providing a concise yet structured overview that helps shape defense or negotiation strategies.</p>

<p>On the academic side, Deep Search proves useful for quickly analyzing the profiles of authors, scholars, or researchers, generating basic bibliographic overviews or comparing theoretical approaches across different schools of thought.</p>

<p>Naturally, in all these cases, it is essential to remember that the AI merely reprocesses publicly accessible information, delivering plausible and structured interpretations — but not certified or legally admissible evidence. It is a tool for support and orientation, not a substitute for official sources.</p>

<h2 class="wp-block-heading">Can I ask for more?</h2>

<p>Absolutely yes — once the search has been carried out, it is possible to request any other type of information that can be inferred from the completed deep search. </p>

<p>For example: <em>“Provide me with a SWOT analysis of Mario Rossi.”</em></p>

<p>The model will then elaborate strengths, weaknesses, opportunities, and threats in relation to the professional context, current position, and declared or inferred skills. But it doesn’t stop there: you can go further by requesting a leadership style assessment, a behavioral interview simulation, a pitch to present to potential investors, or even a cultural compatibility analysis with a specific company.  </p>

<p>This makes Deep Search not just an informational tool, but a strategic dialogue environment, where each piece of information becomes the starting point for a new inference, a new perspective, a new use case — fully customizable according to the objectives of the user. </p>
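The follow-up mechanism described above can be sketched as appending new requests to the same conversation history, so each answer becomes context for the next question. The helper name and the placeholder assistant reply are illustrative assumptions, not a real API transcript.

```python
# Illustrative sketch: follow-up requests reuse the accumulated conversation
# so the model builds on the profile it has already constructed.

def follow_up(history: list[dict], request: str) -> list[dict]:
    """Return a new conversation with one more user request appended."""
    return history + [{"role": "user", "content": request}]

history = [
    {"role": "user", "content": "Analyse Mario Rossi (mariorossi.com)"},
    {"role": "assistant", "content": "<structured profile returned here>"},
]
history = follow_up(history, "Provide me with a SWOT analysis of Mario Rossi.")
history = follow_up(history, "Assess his leadership style and draft an investor pitch.")
```

Each new request inherits everything inferred so far, which is what turns a one-shot query into the “strategic dialogue environment” the article describes.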

<h2 class="wp-block-heading">Conclusions</h2>

<p>The introduction of Deep Search in AI interactions represents a significant leap forward in the everyday use of generative models. It is no longer about receiving simple answers to isolated questions, but rather about obtaining truly structured analyses, based on a combination of real-time web research, deductive inference, and cognitive modeling. Whether it’s for personnel selection, business scouting, investigative insights, self-analysis, or academic support, Deep Search allows us to save time, increase depth, and broaden the perspective through which we understand individuals and contexts. Like any powerful tool, it requires conscious and responsible use, fully aligned with current ethical and legal frameworks. But for those who know how to ask the right questions — and have the courage to hear even the most uncomfortable answers — AI can become an extraordinary ally in understanding both the world and oneself.    </p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/the-ai-that-knows-everything-about-you/">The AI that knows everything about you</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI, CV, and Spectroscopy for Waste Management – Part 2</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-2/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Mon, 19 May 2025 09:46:08 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[electronics]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[mechatronics]]></category>
		<category><![CDATA[open-source]]></category>
		<category><![CDATA[prototyping]]></category>
		<category><![CDATA[Raman spectroscopy]]></category>
		<category><![CDATA[robotic arm]]></category>
		<category><![CDATA[smart recycling]]></category>
		<category><![CDATA[sustainability]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[waste sorting]]></category>
		<guid isPermaLink="false">https://renor.it/ai-cv-and-spectroscopy-for-waste-management-part-2/</guid>

					<description><![CDATA[<p>Automating Waste Sorting: From Theory to Practice with Mechatronics In the previous article, we left off with a realistic outline of the application logic for automated waste sorting. Now it’s time to start understanding concretely how this system can become automated. Let’s now enter the fascinating world of mechatronics and robotics. To save time in [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-2/">AI, CV, and Spectroscopy for Waste Management – Part 2</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Automating Waste Sorting: From Theory to Practice with Mechatronics</h2>

<p>In the previous article, we left off with a realistic outline of the application logic for automated waste sorting. Now it’s time to start understanding concretely how this system can become automated. Let’s now enter the fascinating world of mechatronics and robotics.</p>

<p>To save time in manually sorting waste, the first thing I envision is a container where various scraps can be freely thrown in. From there, each piece of waste will need to be automatically transferred to the chamber where the analysis through Raman spectroscopy will take place.</p>

<p>There are various solutions for transporting the waste, and here a crucial decision for the project arises: do we want to aim for a practical compromise, or do we wish to handle every single type of waste with absolute precision? If we choose the simpler route and accept some compromises—such as limiting ourselves to rigid containers like cartons and bottles—a conveyor belt equipped with paddles might be sufficient.</p>

<p>If, on the other hand, we opt for a more versatile solution—one that allows us to identify and handle any object regardless of its shape or size—then it will be necessary to implement a robotic arm equipped with computer vision, capable of recognizing and selecting each item before placing it on the conveyor belt.</p>

<p>For this project—and also with the goal of creating a truly eco-friendly product—we choose this latter solution. However, we decide to exclude the handling of organic waste, in order to avoid additional technical complications such as the need for continuous washing of the conveyor belt, which would also require the electronic components to be designed with a protection rating of at least IP65.</p>

<h2 class="wp-block-heading">Device mechanics</h2>

<p>Let’s imagine having an initial bin containing different types of waste. A robotic arm, equipped with a camera, visually analyzes the waste, identifies it, picks it up, and places it on a conveyor belt. This conveyor belt transports each object into a dark chamber where our Raman spectroscope is positioned. The spectroscope identifies the material and, based on the result, activates a path selector: plastic will end up in the plastic bin, glass and metal in their respective containers, and paper in the bin dedicated to paper.</p>
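The path-selector step at the end of that chain can be sketched as a simple routing function from the identified material to a destination bin. The bin names and the fallback behavior are assumptions for illustration, not part of the actual device design.

```python
# Minimal sketch of the path-selector logic: the material identified in the
# dark chamber determines which bin the conveyor routes the item to.

BINS = {
    "plastic": "plastic bin",
    "glass": "glass bin",
    "metal": "metal bin",
    "paper": "paper bin",
}

def route(material: str) -> str:
    """Return the destination for an identified material; unknown readings
    are held for inspection rather than mis-sorted."""
    return BINS.get(material, "manual check")
```

Keeping this mapping in one table makes it trivial to add bins later (e.g., a Tetra Pak container) without touching the rest of the control logic.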

<figure class="wp-block-image size-large"><img decoding="async" src="https://renor.it/wp-content/uploads/2025/05/schema3-1024x683.webp" alt="" class="wp-image-528"/></figure>

<h2 class="wp-block-heading">A brief personal note</h2>

<p>When I started describing this project, it seemed relatively simple and straightforward. However, after careful consideration, it turned out to be anything but simple. It’s fascinating to note how a task that seems so immediate and simple in everyday life actually requires a complex combination of technical knowledge.</p>

<p>Along this journey, we will explore together how to tackle these challenges and transform an initial idea into a truly functional and useful final product—open-source and accessible to everyone, provided they have the necessary equipment to create the parts required to bring the product to life. </p>

<p>In the next article, we will cover the design and 3D printing of the various components that will make up our transport element.</p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-2/">AI, CV, and Spectroscopy for Waste Management – Part 2</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI, CV, and Spectroscopy for Waste Management &#8211; Part 1</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-1/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Mon, 19 May 2025 08:03:42 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[3D modeling]]></category>
		<category><![CDATA[3D printing]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[automation]]></category>
		<category><![CDATA[computer vision]]></category>
		<category><![CDATA[environmental sustainability]]></category>
		<category><![CDATA[GitHub]]></category>
		<category><![CDATA[Google Cloud Vision]]></category>
		<category><![CDATA[innovation]]></category>
		<category><![CDATA[materials]]></category>
		<category><![CDATA[mechatronics]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[OCR]]></category>
		<category><![CDATA[open-source project]]></category>
		<category><![CDATA[prototyping]]></category>
		<category><![CDATA[Raman spectroscopy]]></category>
		<category><![CDATA[smart recycling]]></category>
		<category><![CDATA[technology]]></category>
		<category><![CDATA[waste sorting]]></category>
		<guid isPermaLink="false">https://renor.it/ai-cv-and-spectroscopy-for-waste-management-part-1/</guid>

					<description><![CDATA[<p>Do you also find waste sorting boring? I bet it’s happened to you too: after a dinner with friends or a family lunch, the time comes to clean up, and suddenly waste sorting is there, waiting to eat up your time. Paper in one bin, plastic in another, glass and metal in a different container, [&#8230;]</p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-1/">AI, CV, and Spectroscopy for Waste Management &#8211; Part 1</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Do you also find waste sorting boring?</h2>

<p>I bet it’s happened to you too: after a dinner with friends or a family lunch, the time comes to clean up, and suddenly waste sorting is there, waiting to eat up your time. Paper in one bin, plastic in another, glass and metal in a different container, and organic waste in the smallest one… A task that often becomes tedious—especially when the amount of waste to sort increases significantly.  </p>

<p>Hence the question… Is there a way to make all of this simpler, faster, and even automatic? Can today’s technology give us real, practical help in managing this everyday task, saving us both time and effort?  </p>

<h2 class="wp-block-heading">How a Technological Project Is Born: From Idea to Final Product</h2>

<p>During this period, with the heavy workload I’m currently managing, it will be difficult to find enough time to dedicate to this project. However, I’ve decided to use the few free moments I have in the evening after dinner to carry it forward and share it with you step by step.  </p>

<p>Beyond the intrinsic usefulness of this project, what interests me most is showing you how, starting from an initial idea, one can concretely reach the final product. I want to take you through the stages involved in the realization of a project that, at first glance, may seem simple in theory, but is in fact extremely complex—because it requires a broad range of skills that go far beyond just writing code.  </p>

<p>To complete this work, in fact, one needs in-depth knowledge in several fields: from 3D modeling to 3D printing, from prototyping to electronics, all the way to expertise in physics, mathematics, and above all, artificial intelligence. </p>

<p>In this article (and in those that will follow), I will try to guide you through this entire journey, sharing with you the challenges, obstacles, and solutions I encounter along the way. It will be an interesting way to discover together how an innovative idea can be transformed into something truly functional, and we will share it as an open-source project on my GitHub channel.  </p>

<h2 class="wp-block-heading">Let’s assess the initial critical issues.</h2>

<p>We all rely on our senses to explore and interpret the world around us. When we hold an object in our hands, we can observe it, touch it, smell it, or even tap it lightly to hear the sound it makes. Sometimes, we might even taste it to perceive its flavor. We can think of our senses as gateways that allow us to connect and interact with the world.    </p>

<p>Imagine receiving a bottle: just by looking at it or, at most, holding it in your hands, you can immediately tell whether it’s made of glass or plastic. If, on the other hand, someone handed you a can of peeled tomatoes, you’d likely notice the distinctive metallic color, understanding that it’s made of metal. And if you weren’t sure, you might squeeze it to check whether it stays deformed—because you know that metal, when compressed, tends to retain its shape, unlike plastic, which—within certain limits—returns to its original form.  </p>

<p>All of this is possible thanks to the cognitive experience we have developed throughout our lives. </p>

<p>But in the case of a machine… how can we transfer this ability to understand? How can it learn to identify the material an object is made of?  </p>

<h2 class="wp-block-heading">Let’s emulate the senses.</h2>

<p>There are several approaches…</p>

<p>One of the most fascinating approaches is based on Computer Vision, a technology that emulates the human sense of sight.</p>

<p>This approach involves using specialized artificial neural networks that are trained to recognize various objects, materials, and shapes. It’s exactly the same principle that allows modern self-driving cars (without mentioning any well-known brands) to navigate safely on our roads.  </p>

<p>These vehicles are equipped with cameras that continuously analyze everything around them—hundreds of times per second: roads, traffic signs, pedestrians, other vehicles, and potential obstacles. The neural network, carefully trained over months or even years, enables the vehicle to identify each object and respond appropriately: staying within its lane, strictly obeying traffic signs, stop signals, and right-of-way rules, avoiding collisions, and calculating the safest escape route in case of danger.  </p>

<h2 class="wp-block-heading">The first obstacle: not all neural networks are easy to train</h2>

<p>If you’ve followed the discussion this far, you’ve surely noticed an important detail: we just mentioned neural networks trained for months, if not years. And this is where the first real challenge of our project arises.  </p>

<p>The issue, in fact, is far from simple. We are not teaching a machine to recognize a pedestrian—something with a head, two arms, and two legs. Here, we need to train a neural network to understand exactly what material a piece of waste is made of. And the difficulty increases when we consider that the same product—like milk, for example—can be packaged in a plastic bottle by one brand, in a glass bottle by another, or in a Tetra Pak by yet another.    </p>

<p>If we wanted to train a neural network to accurately recognize all packaging materials for every single food item on the market, it would mean collecting and cataloging tens of thousands—if not hundreds of thousands—of different samples. We would need to go over each brand and each product variation multiple times, generating enormous amounts of training data.  </p>

<p>It’s clear that this path is neither practical nor sustainable. We must therefore look for a smarter, more flexible, and scalable solution—one that allows the neural network to “generalize” everything that can be generalized, recognizing materials and objects based on general characteristics rather than those specific to each individual product.   </p>

<h2 class="wp-block-heading">Some reinforcement ideas</h2>

<p>Although it is difficult to train a neural network for certain types of waste, we can still submit the image of the waste to a pre-trained neural network. Google Cloud Vision, for example, allows image analysis and, in cases where the material is visually very recognizable, it can assign generic labels such as:  </p>

<ul class="wp-block-list">
<li>plastic</li>



<li>glass</li>



<li>metal</li>



<li>cardboard</li>



<li>paper</li>
</ul>

<p>The problem is that it’s not 100% reliable—it doesn’t distinguish specific variations (e.g., Tetra Pak vs. plain cardboard). It can’t read materials based on texture or sound, as a human would by manipulating the product. </p>

<p>However, there is a realistic alternative solution. A more reliable approach could be to use Cloud Vision OCR to read the text on the product label, and then search online for information about the material (e.g., PET bottle, Tetra Pak packaging).  </p>

<h2 class="wp-block-heading">Let’s start outlining.</h2>

<p>To avoid the risk of forgetting anything, let’s start by drafting a mind map and a functional diagram so we can adjust and update it as we move forward with the project. </p>

<figure class="wp-block-image size-large"><img decoding="async" src="https://renor.it/wp-content/uploads/2025/05/Sistema-di-gestione-rifiuti-1024x314.webp" alt="" class="wp-image-515"/></figure>

<figure class="wp-block-image size-full ticss-a21b03e6"><img loading="lazy" decoding="async" width="431" height="752" src="https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-1-1.webp" alt="" class="wp-image-518" style="object-fit:cover" srcset="https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-1-1.webp 431w, https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-1-1-172x300.webp 172w" sizes="auto, (max-width: 431px) 100vw, 431px" /></figure>

<h2 class="wp-block-heading">What if this solution doesn’t yield a reliable result?</h2>

<p>It’s possible that, despite all efforts, the system still doesn’t return a reliable and definitive result. So how do we handle that? </p>

<ol class="wp-block-list">
<li>We can place the product in a manual verification state (a solution I’m not particularly fond of).</li>



<li>We can classify the waste as non-recyclable (although this solution isn’t particularly eco-friendly either).</li>



<li>We can look for other ways to understand the material of the product.</li>
</ol>

<p>Let’s focus on the third solution.</p>

<h2 class="wp-block-heading">What other possibilities are there for identifying a material?</h2>

<p>A safe, fast, and real-time system for identifying a material is certainly spectroscopy—specifically, the Raman spectroscope. This represents one of the most reliable and precise techniques for identifying the material, particularly the chemical composition of an object, even in solid, liquid, or polymeric form.  </p>

<p>Raman spectroscopy is based on the interaction of a laser light, at a specific frequency, with the molecular vibrations of a material. When the laser hits a sample, a small portion of the light is scattered in an anomalous way (the Raman effect), and this scattered spectrum is characteristic of the molecular structure of the material.  </p>

<h2 class="wp-block-heading">What can a Raman spectroscope identify?</h2>

<p>It can successfully identify plastic, glass, paper, organic and inorganic compounds, and even Tetra Pak. However, it cannot identify metals because metals reflect light and therefore do not produce Raman signals. This may not be a problem, though, because by exclusion—if a material is not among the listed ones—it is almost certainly metal.   </p>

<p>There are, however, some challenges here as well… A Raman spectroscope can cost, depending on its quality, from a few thousand euros to tens of thousands. But there’s a solution to this too: building our own Raman spectroscope. We’ll explore this part later in a dedicated series of articles.     </p>

<p>Let’s update our diagrams accordingly…</p>

<figure class="wp-block-image size-full ticss-a21f80fc"><img loading="lazy" decoding="async" width="465" height="479" src="https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-2.webp" alt="" class="wp-image-521" srcset="https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-2.webp 465w, https://renor.it/wp-content/uploads/2025/05/Sistema-divisione-rifiuti-2-291x300.webp 291w" sizes="auto, (max-width: 465px) 100vw, 465px" /></figure>

<p>It seems that, in this way, we have correctly managed the application logic of the system. We have broadly understood how we need to proceed.  </p>

<p>Now there’s the entire section on waste transport and sorting, which we will address in the next article.</p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-cv-and-spectroscopy-for-waste-management-part-1/">AI, CV, and Spectroscopy for Waste Management &#8211; Part 1</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Neural Networks in PHP? Yes, It Can Be Done!</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/neural-networks-in-php-yes-it-can-be-done/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Sun, 18 May 2025 10:23:19 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[AI in PHP]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[back-propagation]]></category>
		<category><![CDATA[backend programming]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[feed-forward network]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[microservices]]></category>
		<category><![CDATA[model serialization]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[php]]></category>
		<category><![CDATA[REST API]]></category>
		<category><![CDATA[sigmoid function]]></category>
		<category><![CDATA[Xavier initialization]]></category>
		<category><![CDATA[XOR example]]></category>
		<guid isPermaLink="false">https://renor.it/neural-networks-in-php-yes-it-can-be-done/</guid>

					<description><![CDATA[<p>I’ll begin this article with a premise… PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy—and that’s usually the language I rely on when working in this [&#8230;]</p>
<p>The post <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/neural-networks-in-php-yes-it-can-be-done/">Neural Networks in PHP? Yes, It Can Be Done!</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<p>I’ll begin this article with a premise… PHP is certainly not the ideal language when it comes to artificial intelligence. Neural networks are typically the domain of more scientific languages like Python, which offers optimized libraries for this purpose such as PyTorch and NumPy—and that’s usually the language I rely on when working in this kind of context.   </p>

<p>However, a few months ago I was chatting with a friend over an aperitif, and as usual, we started throwing around a bunch of silly ideas about AI (I’ll spare you the wild ones we came up with). But then we focused on one specific topic: “Would it be possible to create a neural network in PHP?”<br />The short answer is yes—albeit with some limitations.  </p>

<p>To understand this article, we first need to make a few preliminary remarks…</p>

<h2 class="wp-block-heading">Introduction</h2>

<p>In today’s technological landscape, Artificial Intelligence (AI) is no longer a niche topic but a strategic pillar for nearly any company aiming to extract value from its data, optimize processes, and offer truly competitive products.<br />In the field of HR &amp; Workforce Management, for instance, predictive models make it possible to anticipate sudden absences, dynamically calibrate shifts, and, ultimately, save time and resources, precisely the “speed/quality” combination we discussed in this article: <a href="https://renor.it/speed-and-quality-in-projects/?lang=en">https://renor.it/speed-and-quality-in-projects/?lang=en</a>.<br /><br />But how many of you actually know what artificial intelligence really is?</p>

<h2 class="wp-block-heading">AI – The Famed Yet Unknown</h2>

<p>The term Artificial Intelligence refers to the set of techniques that allow a computer system to exhibit abilities normally attributed to human intelligence: reasoning, learning, decision-making, and recognizing complex patterns.<br />Within this broad field, Machine Learning represents the approach in which learning takes place through statistical analysis of data, without having to manually code every rule.<br />In recent years, the rise of Deep Learning has pushed evolution even further: very deep neural networks, composed of dozens or even hundreds of processing layers, are able to detect structures in data that traditional models fail to capture.  </p>

<h2 class="wp-block-heading">What are neural networks used for?</h2>

<p>An artificial neural network is a mathematical model inspired by the structure of the neurons in our brain. After all, almost all human discoveries are based on observing what already exists in nature.  <br />The neural structure of the brain allows us to solve problems where the relationship between input (what we perceive from the outside) and output (what we derive from it) is non-linear or difficult to formalize. </p>

<p>Let’s imagine, for example, that we want to predict the likelihood of an employee being late based on variables such as traffic, weather conditions, personal history of delays and absences, and public transportation schedules. The relationship between these factors is far too complex to be described with a few conditional statements—but a well-trained neural network can learn it by analyzing a large amount of historical data.  </p>

<p>The reason for this ability lies in the fact that each connection between neurons is associated with a weight and a bias term; we will see shortly what these are.</p>

<p>During the training phase, the network adjusts these parameters to minimize a loss function that measures the discrepancy between its prediction and the actual value. This refinement process occurs through the backpropagation algorithm, which computes the gradients, and an optimization method such as stochastic gradient descent, which iteratively updates the weights.<br />At the end of the process, the network does not contain a set of rules written by the programmer, but rather a collection of numerical coefficients that encode, in a distributed manner, the knowledge extracted from the data.   </p>

<p>Now that we have a general overview, we can understand the roles and responsibilities of weights and biases.</p>

<p>The weight is the numerical coefficient that modulates the intensity with which an input signal contributes to the activation of the next neuron. We can think of it like a volume knob: turning it up amplifies the contribution of that specific feature, while turning it down reduces it, even to the point of inverting its effect.<br />From a mathematical standpoint, the weight multiplies the input value and determines the slope of the function the network is learning; a high weight indicates that the input is strongly correlated with the output, while a weight close to zero makes it practically irrelevant.   </p>

<p>The bias, on the other hand, acts as a translator: it is added to the product of input and weight, shifting the overall result up or down before the activation function is applied.<br />Therefore, if the weights—as we’ve seen—represent the slope of a line, the bias represents the y-intercept, allowing the network to model functions that do not necessarily pass through the origin.<br />In practice, the bias allows a neuron to activate even when all inputs are zero, introducing that flexibility which makes neural models true function approximators.   </p>
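<p>To make the two roles concrete, here is a minimal single-neuron sketch. The numbers are purely illustrative and chosen for this example only: the weights scale each input, the bias shifts the weighted sum before the sigmoid is applied.</p>

```php
<?php
// A single neuron: output = sigmoid(w1*x1 + w2*x2 + b)
function sigmoid(float $x): float
{
    return 1.0 / (1.0 + exp(-$x));
}

$inputs  = [1.0, 0.0];   // two example features
$weights = [0.8, -0.5];  // slopes: how strongly each input counts
$bias    = 0.1;          // intercept: shifts the activation threshold

$z = $bias;
foreach ($inputs as $j => $x) {
    $z += $weights[$j] * $x; // weighted sum: 0.1 + 0.8*1.0 - 0.5*0.0 = 0.9
}
$output = sigmoid($z);

// Even with all inputs at zero, the neuron can still produce a
// non-trivial activation thanks to the bias alone:
$quiet = sigmoid($bias);
```

<p>Note how <code>$quiet</code> is above 0.5 even though no input signal arrived: that is exactly the freedom the bias provides.</p>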

<p>During training, weights and biases are updated using the backpropagation algorithm: the gradient of the loss function indicates in which direction and by how much each parameter should be adjusted to reduce the gap between the network’s prediction and the actual value.<br />Iteration after iteration, the network adjusts these two types of parameters in a coordinated manner, refining both the slope and the position of its decision curves, until it captures the complexity of the phenomenon we aim to model.  </p>
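<p>A single update step of this process can be sketched as follows. This is a deliberately simplified, hypothetical example: one linear neuron, a squared-error loss, and a single training pair, so the gradients can be written out by hand.</p>

```php
<?php
// One gradient-descent step for a linear neuron y = w*x + b,
// minimizing the squared error L = (y - t)^2 on one pair (x, t).
$w  = 0.5;  $b = 0.0;    // initial parameters
$lr = 0.1;               // learning rate
$x  = 2.0;  $t = 3.0;    // input and target

$y    = $w * $x + $b;    // forward pass: prediction (here 1.0)
$grad = 2.0 * ($y - $t); // dL/dy = 2*(y - t) (here -4.0)

// Chain rule: dL/dw = dL/dy * x,  dL/db = dL/dy * 1
$w -= $lr * $grad * $x;  // 0.5 - 0.1*(-4.0)*2.0 = 1.3
$b -= $lr * $grad;       // 0.0 - 0.1*(-4.0)     = 0.4

// The new prediction 1.3*2.0 + 0.4 = 3.0 matches the target exactly;
// with many pairs, each step only nudges the parameters slightly.
```

<p>Backpropagation generalizes exactly this chain-rule bookkeeping across multiple layers of neurons.</p>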

<p>In summary, weights and biases are the fundamental building blocks of the network’s adaptive intelligence: the former controls the relative importance of the inputs, while the latter provides the freedom to move within the solution space without predefined geometric constraints.</p>

<h2 class="wp-block-heading">Why is it worth implementing it in PHP?</h2>

<p>One might wonder whether it makes sense to build a neural network in a language traditionally considered for web backend development. The answer is yes—for certain well-defined scenarios.<br />First of all, a native implementation avoids introducing an additional runtime—typically Python—thereby simplifying the build, test, and deploy cycle when the entire application stack is already in PHP. Furthermore, for microservices that require lightweight models and inference times in the order of a few milliseconds, a self-contained solution is more than adequate.<br />There’s also the good old educational aspect, which should not be underestimated: writing the network line by line dismantles the “black box” aura that surrounds many deep learning frameworks, and puts developers in a position to understand, optimize, and—most importantly—debug every step of the computation. This offers a detailed overview of how a neural network works, leading to a deeper and more comprehensive understanding.    </p>

<h2 class="wp-block-heading">Anatomy of a Minimal Feed-Forward Neural Network in PHP</h2>

<p>To move forward, we need to define the backbone of a “bare-metal” neural network that we can build using only the core PHP engine—without relying on C extensions or external libraries.<br />In practice, this means modeling, using native data structures, the following three fundamental elements:  </p>

<ol class="wp-block-list">
<li>layers (the processing elements)</li>
<li>weight matrices</li>
<li>bias vectors</li>
</ol>

<p>Each layer will be represented by a simple two-dimensional array of weights (<code>$weights</code>) and a one-dimensional array of biases (<code>$biases</code>). The transfer of activations from one layer to the next will occur through standard matrix-vector multiplication, followed by the application of an activation function (sigmoid, ReLU, or tanh, depending on the use case).<br />This minimalist scheme has the advantage of remaining readable and facilitating step-by-step debugging, but it imposes certain design choices: no automatic parallelization, no SIMD optimizations, and a need for extreme attention to computational complexity, since the unrestrained use of nested foreach loops in PHP can cause inference times to spike.   </p>
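<p>As a sketch of that matrix-vector step, with the array shapes just described (<code>$weights[out][in]</code>, <code>$biases[out]</code>) and purely illustrative values:</p>

```php
<?php
// Forward pass of one layer using only nested loops.
function sigmoid(float $x): float
{
    return 1.0 / (1.0 + exp(-$x));
}

$weights = [             // 2 output neurons, 3 inputs each
    [0.2, -0.4,  0.1],
    [0.7,  0.3, -0.6],
];
$biases = [0.05, -0.1];
$input  = [1.0, 0.5, -1.0];

$output = [];
foreach ($weights as $i => $row) {
    $z = $biases[$i];            // start from the bias
    foreach ($row as $j => $w) {
        $z += $w * $input[$j];   // matrix-vector multiplication
    }
    $output[$i] = sigmoid($z);   // element-wise activation
}
// $output now holds the activations handed to the next layer.
```

<p>Stacking several such layers, with the output of one becoming the input of the next, is all a feed-forward pass amounts to.</p>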

<p>Nevertheless, for networks with one or two hidden layers and a number of neurons in the order of hundreds, performance remains surprisingly decent—provided that OPcache is enabled and redundant memory allocations are avoided.<br />In essence, before diving into the actual code, it’s important to understand that in PHP, neurons are nothing more than rows in arrays, and gradients are float values updated within a loop. The simplicity of the implementation makes the arithmetic of the network easy to grasp, with each step of the learning process clearly visible.</p>

<h2 class="wp-block-heading">Code Implementation: The Basic Structure of the Neural Network</h2>

<p>At this point in the article, it’s appropriate to present in detail the bare-metal implementation of a two-layer feed-forward neural network, written entirely in PHP 8.1.<br />The following code maintains maximum transparency: every mathematical operation is explicitly expressed using simple for loops, each intermediate variable is stored so it can be inspected during debugging, and the only prerequisites are the PHP engine and opcache enabled in production.  </p>

<p>To make the project easily reusable, I have divided the code into two separate files.<br />The first, NeuralNetwork.php, contains all the neural network logic, complete with classes, activation functions, forward-pass, backpropagation, and training routines.<br />The second, demo_xor.php, is a simple execution script that imports the class, instantiates the network, trains it on the classic XOR problem, and prints the results to the screen.   </p>

<h3 class="wp-block-heading">NeuralNetwork.php</h3>

<div class="wp-block-kevinbatdorf-code-block-pro" data-code-block-pro-font-family="Code-Pro-JetBrains-Mono" style="font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2"><span style="display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E"><svg xmlns="http://www.w3.org/2000/svg" width="54" height="14" viewbox="0 0 54 14"><g fill="none"></g></svg></span><span role="button" style="color:#D4D4D4;display:none" aria-label="Copy" class="code-block-pro-copy-button"><svg xmlns="http://www.w3.org/2000/svg" style="width:24px;height:24px" viewbox="0 0 24 24"><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4"></path><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2"></path></svg></span><pre class="shiki dark-plus" style="background-color: #1E1E1E"><code><span class="line"><span style="color: #D4D4D4">&lt;?php</span></span>
<span class="line"><span style="color: #C586C0">declare</span><span style="color: #D4D4D4">(strict_types=</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/**</span></span>
<span class="line"><span style="color: #6A9955"> * Minimal feed-forward neural network in pure PHP</span></span>
<span class="line"><span style="color: #6A9955"> * MIT License – (c) 2025</span></span>
<span class="line"><span style="color: #6A9955"> */</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/* ---------- Activation functions ---------- */</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/** Sigmoid activation */</span></span>
<span class="line"><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">sigmoid</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">float</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #B5CEA8">1.0</span><span style="color: #D4D4D4"> / (</span><span style="color: #B5CEA8">1.0</span><span style="color: #D4D4D4"> + </span><span style="color: #DCDCAA">exp</span><span style="color: #D4D4D4">(-</span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">));</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/** Derivative of the sigmoid */</span></span>
<span class="line"><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">sigmoid_derivative</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">float</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #9CDCFE">$s</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">sigmoid</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$s</span><span style="color: #D4D4D4"> * (</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4"> - </span><span style="color: #9CDCFE">$s</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/** ReLU activation */</span></span>
<span class="line"><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">relu</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">float</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">max</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">0.0</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/** Derivative of ReLU */</span></span>
<span class="line"><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">relu_derivative</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">float</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4"> &gt; </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4"> ? </span><span style="color: #B5CEA8">1.0</span><span style="color: #D4D4D4"> : </span><span style="color: #B5CEA8">0.0</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/* ---------- Layer class ---------- */</span></span>
<span class="line" />
<span class="line"><span style="color: #569CD6">final</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">class</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">Layer</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">readonly</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$in</span><span style="color: #D4D4D4">;   </span><span style="color: #6A9955">// number of input neurons</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">readonly</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">;  </span><span style="color: #6A9955">// number of output neurons</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** </span><span style="color: #569CD6">@var</span><span style="color: #6A9955"> </span><span style="color: #569CD6">float[]</span><span style="color: #6A9955">[] weight matrix [out][in] */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$W</span><span style="color: #D4D4D4"> = [];</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** </span><span style="color: #569CD6">@var</span><span style="color: #6A9955"> </span><span style="color: #569CD6">float[]</span><span style="color: #6A9955"> bias vector [out] */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$b</span><span style="color: #D4D4D4"> = [];</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$lastInput</span><span style="color: #D4D4D4">  = [];</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$lastZ</span><span style="color: #D4D4D4">      = [];</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$lastOutput</span><span style="color: #D4D4D4"> = [];</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** </span><span style="color: #569CD6">@var</span><span style="color: #6A9955"> </span><span style="color: #569CD6">callable</span><span style="color: #6A9955"> activation function (declared to avoid dynamic properties) */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$activation</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** </span><span style="color: #569CD6">@var</span><span style="color: #6A9955"> </span><span style="color: #569CD6">callable</span><span style="color: #6A9955"> derivative of the activation */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$activation_d</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">__construct</span><span style="color: #D4D4D4">(</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">$in</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4">      </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">callable</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$act</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">callable</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$actDer</span></span>
<span class="line"><span style="color: #D4D4D4">    ) {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">in</span><span style="color: #D4D4D4">           = </span><span style="color: #9CDCFE">$in</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">out</span><span style="color: #D4D4D4">          = </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">activation</span><span style="color: #D4D4D4">   = </span><span style="color: #9CDCFE">$act</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">activation_d</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$actDer</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #6A9955">// Xavier/Glorot uniform initialization</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #9CDCFE">$limit</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">sqrt</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">6</span><span style="color: #D4D4D4"> / (</span><span style="color: #9CDCFE">$in</span><span style="color: #D4D4D4"> + </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">));</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">b</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">] = </span><span style="color: #B5CEA8">0.0</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">$in</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">W</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">][</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">] = (</span><span style="color: #DCDCAA">mt_rand</span><span style="color: #D4D4D4">() / </span><span style="color: #DCDCAA">mt_getrandmax</span><span style="color: #D4D4D4">()) * </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">$limit</span><span style="color: #D4D4D4"> - </span><span style="color: #9CDCFE">$limit</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">            }</span></span>
<span class="line"><span style="color: #D4D4D4">        }</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** Forward propagation */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">forward</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$input</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">array</span></span>
<span class="line"><span style="color: #D4D4D4">    {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastInput</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$input</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastZ</span><span style="color: #D4D4D4">     = [];</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastOutput</span><span style="color: #D4D4D4"> = [];</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">out</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #9CDCFE">$z</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">b</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">in</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #9CDCFE">$z</span><span style="color: #D4D4D4"> += </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">W</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">][</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">] * </span><span style="color: #9CDCFE">$input</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">            }</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastZ</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">]      = </span><span style="color: #9CDCFE">$z</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastOutput</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">] = (</span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">activation</span><span style="color: #D4D4D4">)(</span><span style="color: #9CDCFE">$z</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">        }</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastOutput</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** Back-propagation, returns gradient for previous layer */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">backward</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$gradOutput</span><span style="color: #D4D4D4">, </span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$lr</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">array</span></span>
<span class="line"><span style="color: #D4D4D4">    {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #9CDCFE">$gradInput</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">array_fill</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">in</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0.0</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">out</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #9CDCFE">$delta</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$gradOutput</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">] * (</span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">activation_d</span><span style="color: #D4D4D4">)(</span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastZ</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">]);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #6A9955">// Propagate gradient and update weights</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">in</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #9CDCFE">$gradInput</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">] += </span><span style="color: #9CDCFE">$delta</span><span style="color: #D4D4D4"> * </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">W</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">][</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">W</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">][</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">] -= </span><span style="color: #9CDCFE">$lr</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">$delta</span><span style="color: #D4D4D4"> * </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">lastInput</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$j</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">            }</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #6A9955">// Update bias</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">b</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">] -= </span><span style="color: #9CDCFE">$lr</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">$delta</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        }</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$gradInput</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/* callable */</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$activation</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/* callable */</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$activation_d</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">/* ---------- NeuralNetwork class ---------- */</span></span>
<span class="line" />
<span class="line"><span style="color: #569CD6">final</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">class</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">NeuralNetwork</span></span>
<span class="line"><span style="color: #D4D4D4">{</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** </span><span style="color: #569CD6">@var</span><span style="color: #6A9955"> </span><span style="color: #4EC9B0">Layer</span><span style="color: #569CD6">[]</span><span style="color: #6A9955"> */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">private</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$layers</span><span style="color: #D4D4D4"> = [];</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** Add a layer to the network */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">addLayer</span><span style="color: #D4D4D4">(</span><span style="color: #4EC9B0">Layer</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$layer</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">void</span></span>
<span class="line"><span style="color: #D4D4D4">    {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">layers</span><span style="color: #D4D4D4">[] = </span><span style="color: #9CDCFE">$layer</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/** Forward pass through all layers */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">predict</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">): </span><span style="color: #569CD6">array</span></span>
<span class="line"><span style="color: #D4D4D4">    {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$x</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">foreach</span><span style="color: #D4D4D4"> (</span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">layers</span><span style="color: #D4D4D4"> as </span><span style="color: #9CDCFE">$layer</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$layer</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">forward</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">        }</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">return</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #6A9955">/**</span></span>
<span class="line"><span style="color: #6A9955">     * Train the network with SGD and mean squared error</span></span>
<span class="line"><span style="color: #6A9955">     * </span><span style="color: #569CD6">@param</span><span style="color: #6A9955"> </span><span style="color: #569CD6">float[]</span><span style="color: #6A9955">[] $xTrain</span></span>
<span class="line"><span style="color: #6A9955">     * </span><span style="color: #569CD6">@param</span><span style="color: #6A9955"> </span><span style="color: #569CD6">float[]</span><span style="color: #6A9955">[] $yTrain</span></span>
<span class="line"><span style="color: #6A9955">     */</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #569CD6">public</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">function</span><span style="color: #D4D4D4"> </span><span style="color: #DCDCAA">train</span><span style="color: #D4D4D4">(</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">array</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$yTrain</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4">   </span><span style="color: #9CDCFE">$epochs</span><span style="color: #D4D4D4">     = </span><span style="color: #B5CEA8">1000</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">float</span><span style="color: #D4D4D4"> </span><span style="color: #9CDCFE">$lr</span><span style="color: #D4D4D4">         = </span><span style="color: #B5CEA8">0.1</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">bool</span><span style="color: #D4D4D4">  </span><span style="color: #9CDCFE">$verbose</span><span style="color: #D4D4D4">    = </span><span style="color: #569CD6">true</span><span style="color: #D4D4D4">,</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #569CD6">int</span><span style="color: #D4D4D4">   </span><span style="color: #9CDCFE">$logStride</span><span style="color: #D4D4D4">  = </span><span style="color: #B5CEA8">100</span></span>
<span class="line"><span style="color: #D4D4D4">    ): </span><span style="color: #569CD6">void</span><span style="color: #D4D4D4"> {</span></span>
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #9CDCFE">$n</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">count</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">        </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$e</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$e</span><span style="color: #D4D4D4"> &lt;= </span><span style="color: #9CDCFE">$epochs</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$e</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #9CDCFE">$loss</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0.0</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$k</span><span style="color: #D4D4D4"> = </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$k</span><span style="color: #D4D4D4"> &lt; </span><span style="color: #9CDCFE">$n</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$k</span><span style="color: #D4D4D4">++) {</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">   = </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">predict</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$k</span><span style="color: #D4D4D4">]);</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #6A9955">// MSE derivative: 2*(ŷ - y)</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #9CDCFE">$grad</span><span style="color: #D4D4D4">  = [];</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #C586C0">foreach</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4"> as </span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4"> =&gt; </span><span style="color: #9CDCFE">$o</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">                    </span><span style="color: #9CDCFE">$diff</span><span style="color: #D4D4D4">      = </span><span style="color: #9CDCFE">$o</span><span style="color: #D4D4D4"> - </span><span style="color: #9CDCFE">$yTrain</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$k</span><span style="color: #D4D4D4">][</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">                    </span><span style="color: #9CDCFE">$grad</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$i</span><span style="color: #D4D4D4">]  = </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4"> * </span><span style="color: #9CDCFE">$diff</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">                    </span><span style="color: #9CDCFE">$loss</span><span style="color: #D4D4D4">     += </span><span style="color: #9CDCFE">$diff</span><span style="color: #D4D4D4"> ** </span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">;</span></span>
<span class="line"><span style="color: #D4D4D4">                }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #6A9955">// Backward pass</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #C586C0">for</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$l</span><span style="color: #D4D4D4"> = </span><span style="color: #DCDCAA">count</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">layers</span><span style="color: #D4D4D4">) - </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$l</span><span style="color: #D4D4D4"> &gt;= </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">; </span><span style="color: #9CDCFE">$l</span><span style="color: #D4D4D4">--) {</span></span>
<span class="line"><span style="color: #D4D4D4">                    </span><span style="color: #9CDCFE">$grad</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">$this</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #9CDCFE">layers</span><span style="color: #D4D4D4">[</span><span style="color: #9CDCFE">$l</span><span style="color: #D4D4D4">]-&gt;</span><span style="color: #DCDCAA">backward</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$grad</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$lr</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">                }</span></span>
<span class="line"><span style="color: #D4D4D4">            }</span></span>
<span class="line" />
<span class="line"><span style="color: #D4D4D4">            </span><span style="color: #C586C0">if</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$verbose</span><span style="color: #D4D4D4"> &amp;&amp; </span><span style="color: #9CDCFE">$e</span><span style="color: #D4D4D4"> % </span><span style="color: #9CDCFE">$logStride</span><span style="color: #D4D4D4"> === </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">                </span><span style="color: #DCDCAA">printf</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">"Epoch %d/%d - loss: %.6f</span><span style="color: #D7BA7D">\n</span><span style="color: #CE9178">"</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$e</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$epochs</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$loss</span><span style="color: #D4D4D4"> / </span><span style="color: #9CDCFE">$n</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">            }</span></span>
<span class="line"><span style="color: #D4D4D4">        }</span></span>
<span class="line"><span style="color: #D4D4D4">    }</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span></code></pre></div>

<p>In this file, we find the activation functions, sigmoid and ReLU, along with their respective derivatives. Keeping them at global scope, rather than encapsulating them in the class, avoids method-call overhead and lets them be passed as callables directly to the layer constructor, preserving flexibility without sacrificing performance.  </p>
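The definitions themselves appear earlier in the file (not shown here); a plausible sketch, using the same names the XOR demo below passes as strings, might be:

```php
<?php
declare(strict_types=1);

// Sketch of the global-scope activations described above; the exact
// bodies in the original file may differ. Derivatives are expressed
// in terms of the pre-activation z, matching backward()'s use of lastZ.
function sigmoid(float $z): float
{
    return 1.0 / (1.0 + exp(-$z));
}

function sigmoid_derivative(float $z): float
{
    $s = sigmoid($z);
    return $s * (1.0 - $s); // σ'(z) = σ(z)(1 - σ(z))
}

function relu(float $z): float
{
    return max(0.0, $z);
}

function relu_derivative(float $z): float
{
    return $z > 0.0 ? 1.0 : 0.0;
}
```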

<p>The Layer class is declared as final to prevent unwanted extension and represents the logical unit of computation. It contains the in and out integers marked as readonly, ensuring their integrity throughout the object&#8217;s entire lifecycle.<br />The weight matrix and bias vector are initialized using the Xavier initialization technique, which distributes values over an interval whose half-width is inversely proportional to the square root of the total number of input and output connections.<br />This mathematical strategy prevents the activations from saturating during the initial epochs, a phenomenon that would otherwise compromise the learning process.   </p>

<p><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-898b1dbe08046c765142d1c89d619143_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#119;&#95;&#123;&#105;&#106;&#125;&#32;&#92;&#115;&#105;&#109;&#32;&#92;&#109;&#97;&#116;&#104;&#99;&#97;&#108;&#32;&#85;&#92;&#33;&#92;&#66;&#105;&#103;&#108;&#40;&#45;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#116;&#102;&#114;&#97;&#99;&#123;&#54;&#125;&#123;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#125;&#125;&#44;&#92;&#44;&#32;&#92;&#115;&#113;&#114;&#116;&#123;&#92;&#116;&#102;&#114;&#97;&#99;&#123;&#54;&#125;&#123;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#125;&#125;&#92;&#66;&#105;&#103;&#114;&#41;" title="Rendered by QuickLaTeX.com" height="32" width="257" style="vertical-align: -11px;"/></p>

<p><br />where <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-26e3fea958c22ae912c81071bb4dcf67_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#92;&#109;&#97;&#116;&#104;&#99;&#97;&#108;&#123;&#85;&#125;&#40;&#97;&#44;&#32;&#98;&#41;" title="Rendered by QuickLaTeX.com" height="19" width="52" style="vertical-align: -5px;"/> denotes the continuous uniform distribution between a and b; the term under the square root, <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-3512350736c00e541a6d7cd1157791e1_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#92;&#115;&#113;&#114;&#116;&#123;&#54;&#32;&#47;&#32;&#40;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#41;&#125;" title="Rendered by QuickLaTeX.com" height="22" width="125" style="vertical-align: -6px;"/>, serves as the upper and lower bound of the sampling interval.<br />Alternatively, one could use a Gaussian distribution with zero mean and variance <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-ecbd0db9b46ca68035ac9f0e178b5931_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#92;&#115;&#105;&#103;&#109;&#97;&#94;&#123;&#50;&#125;&#61;&#32;&#50;&#32;&#47;&#32;&#40;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#105;&#110;&#125;&#125;&#32;&#43;&#32;&#110;&#95;&#123;&#92;&#116;&#101;&#120;&#116;&#123;&#111;&#117;&#116;&#125;&#125;&#41;" title="Rendered by QuickLaTeX.com" height="20" width="149" style="vertical-align: -5px;"/>, but the expression above—used in the code example—is the original uniform form proposed by Glorot and Bengio. </p>
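As a minimal sketch of the formula above (assuming the constructor, not shown here, does something equivalent), the sampling bound and a uniform draw could be written as:

```php
<?php
declare(strict_types=1);

// Xavier/Glorot uniform bound: sqrt(6 / (n_in + n_out)).
// Helper names are illustrative; the Layer constructor presumably
// inlines something equivalent when filling W.
function xavierLimit(int $nIn, int $nOut): float
{
    return sqrt(6.0 / ($nIn + $nOut));
}

// Draw one weight uniformly from [-limit, +limit].
function xavierWeight(int $nIn, int $nOut): float
{
    $limit = xavierLimit($nIn, $nOut);
    return -$limit + 2.0 * $limit * (mt_rand() / mt_getrandmax());
}
```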

<p>During forward propagation, each neuron sums the dot product of weights and inputs with the bias, then applies the chosen activation function. The intermediate results, lastInput, lastZ, and lastOutput, are cached for reuse during backpropagation, so the gradient computation proceeds without recomputing the forward pass. This caching also makes step-by-step debugging straightforward.<br /><br /><br />The backward method receives the error gradient from the following layer, multiplies it by the local derivative of the activation function evaluated at lastZ, and updates weights and biases by subtracting a step proportional to the learning rate. At the same time, it returns the gradient to be propagated backward to the previous layer.  </p>

<p>The inner loop is entirely manual—a deliberate choice that highlights the underlying mathematics and makes the code perfectly transparent, even to those who have never used specialized libraries. </p>

<p>The NeuralNetwork class, also declared as final, acts as the orchestrator: it maintains the array of layers and provides the predict method, which channels an input vector through each layer in turn.<br /><br /><br />The train method implements stochastic gradient descent with mean squared error. It iterates over the training set for the requested number of epochs; for each example it computes the difference between the predicted and expected output, multiplies it by two (the derivative of the squared error), and propagates the resulting gradient backward, updating the layers in reverse order.<br /><br /><br />At each epoch, it accumulates the loss into a global indicator which, if the verbose flag is enabled, is printed at regular intervals, allowing real-time monitoring of the model&#8217;s convergence.   </p>

<p>The layer constructor accepts callables, so if in the future you wish to use different activation functions, you can simply pass their references without altering the architecture. </p>
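For instance, a tanh pair (hypothetical names, following the same pre-activation convention as the sigmoid pair) could be dropped in without touching either class:

```php
<?php
declare(strict_types=1);

// Hypothetical tanh activation pair; the derivative is expressed
// in terms of z, like the sigmoid pair used in the XOR demo.
function tanh_fn(float $z): float
{
    return tanh($z);
}

function tanh_derivative(float $z): float
{
    $t = tanh($z);
    return 1.0 - $t * $t; // tanh'(z) = 1 - tanh²(z)
}

// Then, in the demo, simply pass the new names:
// $net->addLayer(new Layer(2, 3, 'tanh_fn', 'tanh_derivative'));
```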


<h3 class="wp-block-heading">demo_xor.php</h3>

<div class="wp-block-kevinbatdorf-code-block-pro" data-code-block-pro-font-family="Code-Pro-JetBrains-Mono" style="font-size:.875rem;font-family:Code-Pro-JetBrains-Mono,ui-monospace,SFMono-Regular,Menlo,Monaco,Consolas,monospace;line-height:1.25rem;--cbp-tab-width:2"><span style="display:block;padding:16px 0 0 16px;margin-bottom:-1px;width:100%;text-align:left;background-color:#1E1E1E"><svg xmlns="http://www.w3.org/2000/svg" width="54" height="14" viewbox="0 0 54 14"><g fill="none"></g></svg></span><span role="button" style="color:#D4D4D4;display:none" aria-label="Copy" class="code-block-pro-copy-button"><svg xmlns="http://www.w3.org/2000/svg" style="width:24px;height:24px" viewbox="0 0 24 24"><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2m-6 9l2 2 4-4"></path><path d="M9 5H7a2 2 0 00-2 2v12a2 2 0 002 2h10a2 2 0 002-2V7a2 2 0 00-2-2h-2M9 5a2 2 0 002 2h2a2 2 0 002-2M9 5a2 2 0 012-2h2a2 2 0 012 2"></path></svg></span><pre class="shiki dark-plus" style="background-color: #1E1E1E"><code><span class="line"><span style="color: #D4D4D4">&lt;?php</span></span>
<span class="line"><span style="color: #C586C0">declare</span><span style="color: #D4D4D4">(strict_types=</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">// Include the neural network implementation</span></span>
<span class="line"><span style="color: #C586C0">require_once</span><span style="color: #D4D4D4"> </span><span style="color: #569CD6">__DIR__</span><span style="color: #D4D4D4"> </span><span style="color: #D4D4D4">.</span><span style="color: #D4D4D4"> </span><span style="color: #CE9178">'/NeuralNetwork.php'</span><span style="color: #D4D4D4">;</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">// Training data for XOR</span></span>
<span class="line"><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4"> = [</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #9CDCFE">$yTrain</span><span style="color: #D4D4D4"> = [</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">    [</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">],</span></span>
<span class="line"><span style="color: #D4D4D4">];</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">// Build the network: 2-3-1 with sigmoid activations</span></span>
<span class="line"><span style="color: #9CDCFE">$net</span><span style="color: #D4D4D4"> = </span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">NeuralNetwork</span><span style="color: #D4D4D4">();</span></span>
<span class="line"><span style="color: #9CDCFE">$net</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">addLayer</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">Layer</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">2</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">3</span><span style="color: #D4D4D4">, </span><span style="color: #CE9178">'sigmoid'</span><span style="color: #D4D4D4">, </span><span style="color: #CE9178">'sigmoid_derivative'</span><span style="color: #D4D4D4">));</span></span>
<span class="line"><span style="color: #9CDCFE">$net</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">addLayer</span><span style="color: #D4D4D4">(</span><span style="color: #569CD6">new</span><span style="color: #D4D4D4"> </span><span style="color: #4EC9B0">Layer</span><span style="color: #D4D4D4">(</span><span style="color: #B5CEA8">3</span><span style="color: #D4D4D4">, </span><span style="color: #B5CEA8">1</span><span style="color: #D4D4D4">, </span><span style="color: #CE9178">'sigmoid'</span><span style="color: #D4D4D4">, </span><span style="color: #CE9178">'sigmoid_derivative'</span><span style="color: #D4D4D4">));</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">// Train the network</span></span>
<span class="line"><span style="color: #9CDCFE">$net</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">train</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">$yTrain</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">epochs</span><span style="color: #D4D4D4">: </span><span style="color: #B5CEA8">5000</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">lr</span><span style="color: #D4D4D4">: </span><span style="color: #B5CEA8">0.5</span><span style="color: #D4D4D4">, </span><span style="color: #9CDCFE">logStride</span><span style="color: #D4D4D4">: </span><span style="color: #B5CEA8">500</span><span style="color: #D4D4D4">);</span></span>
<span class="line" />
<span class="line"><span style="color: #6A9955">// Test predictions</span></span>
<span class="line"><span style="color: #C586C0">foreach</span><span style="color: #D4D4D4"> (</span><span style="color: #9CDCFE">$xTrain</span><span style="color: #D4D4D4"> as </span><span style="color: #9CDCFE">$sample</span><span style="color: #D4D4D4">) {</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4"> = </span><span style="color: #9CDCFE">$net</span><span style="color: #D4D4D4">-&gt;</span><span style="color: #DCDCAA">predict</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$sample</span><span style="color: #D4D4D4">)[</span><span style="color: #B5CEA8">0</span><span style="color: #D4D4D4">];</span></span>
<span class="line"><span style="color: #D4D4D4">    </span><span style="color: #DCDCAA">printf</span><span style="color: #D4D4D4">(</span><span style="color: #CE9178">"Input %s ⇒ Output %.4f</span><span style="color: #D7BA7D">\n</span><span style="color: #CE9178">"</span><span style="color: #D4D4D4">, </span><span style="color: #DCDCAA">json_encode</span><span style="color: #D4D4D4">(</span><span style="color: #9CDCFE">$sample</span><span style="color: #D4D4D4">), </span><span style="color: #9CDCFE">$out</span><span style="color: #D4D4D4">);</span></span>
<span class="line"><span style="color: #D4D4D4">}</span></span></code></pre></div>

<p>In this working snippet, the training dataset and the corresponding ground truth for the classic XOR problem are declared: four pairs of binary inputs, each paired with its expected output. This setup allows us to test the model’s ability to learn a non-linear function.</p>

<p>The core logic begins with the instantiation of the NeuralNetwork object. A two-layer topology is then constructed: the first layer accepts the two input features and projects them onto three output neurons; the second layer receives those three intermediate activations and returns a single scalar value.<br />In both layers, the sigmoid activation function is used—chosen for its didactic simplicity and for the ease with which its gradient is computed during the backpropagation phase.   </p>

<p>Here is the sigmoid function, along with its derivative—commonly used in neural networks for both activation and backpropagation:</p>

<p><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-954348c5113eda378379bc7343a8e5f8_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#61;&#92;&#102;&#114;&#97;&#99;&#123;&#49;&#125;&#123;&#49;&#43;&#92;&#109;&#97;&#116;&#104;&#114;&#109;&#32;&#101;&#94;&#123;&#45;&#120;&#125;&#125;" title="Rendered by QuickLaTeX.com" height="25" width="101" style="vertical-align: -9px;"/></p>

<p>where <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-14618925f387ca16527ce10c1b1d5121_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#32;&#92;&#109;&#97;&#116;&#104;&#114;&#109;&#123;&#101;&#125;" title="Rendered by QuickLaTeX.com" height="8" width="8" style="vertical-align: 0px;"/> is Euler’s number (the base of natural logarithms), and <img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-ede05c264bba0eda080918aaa09c4658_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#120;" title="Rendered by QuickLaTeX.com" height="8" width="10" style="vertical-align: 0px;"/> represents the real-valued input to the neuron.<br />This expression guarantees a continuous output between 0 and 1, with an inflection point at the origin that defines its characteristic “S” shape: for very large negative values, the function tends asymptotically toward 0, while for very large positive values, it approaches 1.</p>

<p>In the context of machine learning, the derivative of the sigmoid is often used during backpropagation. Its compact form is:</p>

<p><img loading="lazy" decoding="async" src="https://renor.it/wp-content/ql-cache/quicklatex.com-8f744449503600140163a99f16b8007c_l3.png" class="ql-img-inline-formula quicklatex-auto-format" alt="&#92;&#115;&#105;&#103;&#109;&#97;&#39;&#40;&#120;&#41;&#61;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#92;&#44;&#92;&#98;&#105;&#103;&#108;&#40;&#49;&#45;&#92;&#115;&#105;&#103;&#109;&#97;&#40;&#120;&#41;&#92;&#98;&#105;&#103;&#114;&#41;" title="Rendered by QuickLaTeX.com" height="22" width="180" style="vertical-align: -7px;"/></p>

<p>This latter relationship derives directly from the original definition and allows the gradient to be computed without the need for additional exponential functions, thus optimizing the weight update phase.</p>
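As a quick check, the identity follows from differentiating the definition directly:

```latex
\sigma'(x)
= \frac{d}{dx}\,\bigl(1+e^{-x}\bigr)^{-1}
= \frac{e^{-x}}{\bigl(1+e^{-x}\bigr)^{2}}
= \underbrace{\frac{1}{1+e^{-x}}}_{\sigma(x)}
  \cdot
  \underbrace{\frac{e^{-x}}{1+e^{-x}}}_{1-\sigma(x)}
= \sigma(x)\bigl(1-\sigma(x)\bigr)
```

since \(e^{-x}/(1+e^{-x}) = 1 - 1/(1+e^{-x})\).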

<p>Continuing on, the train method initiates the actual training process, iterating over the same small dataset for five thousand epochs. With a learning rate set to 0.5 and a loss log printed every 500 iterations, the loop performs gradient descent on the mean squared error, updating the weights and biases of both layers at each observation.</p>

<p>At the end of training, a simple foreach loop iterates once more over the four input patterns, feeds them to the predict method, and prints the network’s numerical outputs to the screen—allowing for immediate comparison with the expected outputs and a quick evaluation of the model’s accuracy.</p>

<p>In a production context, this same logic could easily be encapsulated in a REST endpoint or a command-line script, but in this minimalist form it already provides a complete demonstration of how PHP can manage the entire lifecycle of a small neural network—from layer definition to final prediction.</p>
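In symbols, assuming the squared-error loss the paragraph describes, each update step takes the standard gradient-descent form (here \(\eta = 0.5\) is the learning rate and \(N\) the number of samples):

```latex
L = \frac{1}{N}\sum_{i=1}^{N}\bigl(\hat{y}_i - y_i\bigr)^2,
\qquad
w \leftarrow w - \eta\,\frac{\partial L}{\partial w},
\qquad
b \leftarrow b - \eta\,\frac{\partial L}{\partial b}
```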

<h2 class="wp-block-heading">Conclusions</h2>

<p>The experiment demonstrates that, although PHP wasn’t designed for numerical computing, it is possible to implement a basic yet functional neural network, train it in reasonable time on small-scale problems, and deploy it in production without introducing an additional runtime. </p>

<p>Xavier initialization preserves signal stability from the very first epochs, the sigmoid ensures a well-defined gradient, and the fully transparent, non–black-box approach makes the model an excellent didactic tool: every weight, every bias, and every step of backpropagation is under full control.<br />It’s clear that this solution is not meant to compete with GPU-optimized frameworks—but when the goal is to integrate lightweight inference into an already PHP-based stack, or simply to gain a deep, hands-on understanding of how a neural network works, the presented implementation offers an elegant and accessible path.  </p>

<p>In conclusion, the most interesting aspect of this article was not to build a new ChatGPT, but rather to foster awareness and learn the mathematical principles behind the construction of a simple neural network—line by line. </p>

<p>In my opinion, the true power of artificial intelligence lies in understanding the scientific principles on which it is based, rather than in the specific programming languages we use to implement it. </p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/neural-networks-in-php-yes-it-can-be-done/">Neural Networks in PHP? Yes, It Can Be Done!</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>Why will AI never be “human”?</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/why-will-ai-never-be-human/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Sun, 04 May 2025 18:35:12 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[AI]]></category>
		<category><![CDATA[AI Act]]></category>
		<category><![CDATA[AI and morality]]></category>
		<category><![CDATA[AI and the future of human kind]]></category>
		<category><![CDATA[AI control]]></category>
		<category><![CDATA[AI ethics]]></category>
		<category><![CDATA[AI limits]]></category>
		<category><![CDATA[AI Philosophy]]></category>
		<category><![CDATA[AI risks]]></category>
		<category><![CDATA[AI safety]]></category>
		<category><![CDATA[AI vs Mankind]]></category>
		<category><![CDATA[Artificial Emotions]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[Deep Learning]]></category>
		<category><![CDATA[Explainable AI]]></category>
		<category><![CDATA[Films on AI]]></category>
		<category><![CDATA[Hollywood]]></category>
		<category><![CDATA[Human Brain]]></category>
		<category><![CDATA[Machine Learning]]></category>
		<category><![CDATA[multimodal AI]]></category>
		<category><![CDATA[neural networks]]></category>
		<category><![CDATA[Neuroscience]]></category>
		<category><![CDATA[Religious paradox of AI]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[Technology and society]]></category>
		<category><![CDATA[Technology innovation]]></category>
		<category><![CDATA[The future of technology]]></category>
		<guid isPermaLink="false">https://renor.it/why-will-ai-never-be-human/</guid>

					<description><![CDATA[<p>Can artificial intelligence ever truly be human? This article explores the differences and similarities between AI and the human brain, addressing limitations, ethical risks, and future perspectives, and concludes with a philosophical reflection on our role as creators. </p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/why-will-ai-never-be-human/">Why will AI never be “human”?</a> comes from <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[<h1>Will AI ever be human?</h1>
<p data-pm-slice="1 1 []">How many of us were moved by the final scene of Steven Spielberg’s film A.I. Artificial Intelligence, when the little humanoid reunites with his mother? Or while watching Bicentennial Man with Robin Williams? Or, if we want to go much further back: My Living Doll (“Mio fratello Chip”) or even the little robot Number 5 from Short Circuit?  </p>
<p>For years, Hollywood cinema has portrayed humanoid futures in a wide range of forms. From the child capable of feeling emotions in A.I. to machines utterly devoid of compassion, like in Terminator, programmed with a single purpose: to destroy. </p>
<h2>Observation of nature</h2>
<p data-pm-slice="1 1 []">Artificial intelligence, like many other human inventions, originates from a fundamental principle: to imitate and, in some cases, emulate what already exists in nature. Take airplanes, for example.<br />Flight was studied as early as the 4th century BCE by Aristotle. In his Historia Animalium, he described the movement of wings and the difference between gliding and flapping birds. But it was thanks to Leonardo da Vinci that a more engineering-oriented approach began—combining direct observation, anatomical drawings, and mechanical analysis.<br />He formulated theories based on the wing and muscle anatomy of birds, inventing models related to center of mass, air resistance, and lift, and created artificial devices that mimicked flapping wings and gliding.    </p>
<p><em>&#8220;Lo uccello ha due potenzie di motore, l&#8217;una è de&#8217; muscoli, l&#8217;altra è del vento&#8221; which means “The bird has two powers of motion: one is from the muscles, the other is from the wind.” – Leonardo da Vinci, Codex on the Flight of Birds</em></p>
<p data-pm-slice="1 1 []">Over time, science and technology transformed that early study into the modern airplane we know today, emulating the natural flight of birds—creatures that fly by nature, not by an understanding of mathematics or physics.<br />By studying their flight, we formulated theories based on physical principles that allowed us to build something that surpasses natural flight; this is why I chose the term “emulate.”<br />To emulate means to take something as a model and create something even better, unlike “simulate,” which means to build a less efficient prototype or imitation.</p>
<p>We emulated flight because an airplane can carry hundreds of people and goods, and fly at speeds faster than sound—something a bird could never do.</p>
<p>For the invention of artificial intelligence, we relied on the study of the human brain. As intelligent human beings, we are capable of understanding a question and responding with an answer. We have come to understand which areas of the brain are involved in different types of reasoning, and how reasoning arises from electrical transmissions between neurons through synaptic “weights,” oscillatory rhythms, chemical modulations, and control circuits.  </p>
<p>From this research, we have modeled mathematical techniques, statistical methods, and neural networks that attempt to imitate these behaviors—albeit in a different way—in order to produce similar results.</p>
<p>At this point, the question naturally arises:</p>
<h2>Will we ever be able to emulate the human brain?</h2>
<p data-pm-slice="1 1 []">The human brain is an incredible machine that still holds many unanswered questions.<br />In both large language models (LLMs) and the human brain, language production arises from a predictive process based on prior experience. However, the similarities stop at the abstract level of “weights + predictions.”<br />Beneath the surface, the two systems operate in fundamentally different ways.</p>
<p>In the human brain, the “statistical memory of experience” appears as a network of synapses strengthened or weakened over years of linguistic exposure.<br />In an LLM, it takes the form of a matrix of weights derived from training on billions of tokens.</p>
<p>When we formulate a concept, the brain employs predictive coding: temporal and frontal areas anticipate upcoming phonemes and words. In fact, an electroencephalogram (EEG) would show error signals if the prediction fails.<br />In an LLM, we have an algorithm that selects the most probable next token.    </p>
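To make the LLM side of the comparison concrete, here is a toy sketch, in Python rather than the PHP used elsewhere on this blog, of greedy next-token selection; the vocabulary and logit values are invented for illustration:

```python
import math

def softmax(logits):
    # Subtract the max for numerical stability before exponentiating
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores (logits) produced by a model for four candidate tokens
vocab = ["cat", "dog", "bird", "fish"]
logits = [2.0, 1.0, 0.5, -1.0]

probs = softmax(logits)
# Greedy decoding: pick the single most probable token
next_token = vocab[probs.index(max(probs))]
print(next_token)  # prints "cat"
```

Real systems usually sample from this distribution (with temperature, top-k, and similar strategies) rather than always taking the maximum, but the core step is the same: a probability over tokens, then a choice.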
<p>Both systems “weigh” recent history to decide the next step, but the similarity ends there.<br />An LLM optimizes only the probability of the next textual token—nothing more.<br />In the human brain, prediction operates on multiple levels—semantic, pragmatic, prosodic, and sensory feedback—and can even disregard lexical form when necessary. </p>
<p>At the level of learning updates, an LLM relies on global backpropagation.<br />The human brain, by contrast, exhibits local plasticity, driven by electrical impulses, neuromodulators, and temporal-spatial factors; there is no backpropagation in the strict sense. </p>
<p class="p1">If we consider the extralinguistic context, a major difference between an LLM and the human brain clearly emerges.<br />An artificial language model has no physical body and no real sensory perceptions; its universe is made up solely of the information it was trained on, based entirely on text.</p>
<p>In contrast, the human brain continuously integrates an extraordinary variety of sensory inputs: images, sounds, tactile sensations, emotions, moods, social goals, and much more.<br />This richness allows humans to choose the next word dynamically, drawing on complex strategies such as irony, self-censorship, empathy, or collaboration—without being limited to mere probability statistics.   </p>
<p class="p1">We could explore the topic even further, but that would lead us into highly technical territory.<br />The essential concept I want to emphasize is that—even in the seemingly simple task of answering a question—the human brain engages an extraordinary complexity of cognitive processes that go far beyond the pure statistical probability on which AI is based. </p>
<p class="p1">Human intelligence is not limited to the ability to answer linguistic questions: it is an extremely vast and diverse set of capacities and skills. Breathing, moving with agility, perceiving a scent and distinguishing its components, observing and interpreting a visual scene, understanding logical reasoning, feeling emotions, laughing—these and many other activities are carried out simultaneously by our brain in a natural and parallel manner, demonstrating a level of complexity and efficiency that current artificial intelligence is still far from being able to fully emulate. </p>
<h3>Currently, AI is “selective.”</h3>
<p class="p1">What do I mean by “selective”?<br />I mean that artificial intelligence, at present, does not possess a general and unified cognitive ability like that of a human being, but is instead composed of many separate systems, each highly specialized in a single task.</p>
<p>For example, language models (LLMs) excel in understanding and generating text, but they have no inherent ability to directly comprehend images or sounds. To analyze an image, in fact, it is necessary to integrate other dedicated models that operate separately in a precise sequence, giving rise to what are known as multi-modal architectures.   </p>
<p class="p1">Let’s imagine we want to analyze the content of an image using AI. First, an LLM is involved to understand the question posed in textual form. Then, a visual encoder interprets and “translates” the image into a format the system can understand. Finally, the LLM once again generates the textual response.</p>
<p>If the question were asked by voice and we expected a vocal reply, the steps would increase further:<br />a Speech-To-Text layer to convert speech into text,<br />an LLM to understand the question,<br />a visual encoder to process the image,<br />another LLM to formulate the textual response,<br />and finally, a Text-To-Speech layer to produce the vocal output.  </p>
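The serial chain just described can be sketched schematically. In this Python toy, every function is a hypothetical stand-in for an entire model, not a real API:

```python
# Schematic sketch of the serial multi-modal pipeline described above.
# Each function below is a hypothetical placeholder for a full model.

def speech_to_text(audio):
    # Stand-in for a Speech-To-Text layer
    return "What animal is in the picture?"

def visual_encoder(image):
    # Stand-in for a visual encoder that "translates" the image
    # into features the language model can consume
    return {"objects": ["dog"]}

def llm_answer(question, image_features):
    # Stand-in for the LLM that combines the question with the image features
    return f"The picture shows a {image_features['objects'][0]}."

def text_to_speech(text):
    # Stand-in for a Text-To-Speech layer producing audio bytes
    return b"<audio for: " + text.encode() + b">"

# Each stage runs strictly after the previous one: the serial chain
# the article contrasts with the brain's parallel processing.
question = speech_to_text(b"<raw audio>")
features = visual_encoder(b"<raw image>")
answer = llm_answer(question, features)
audio_out = text_to_speech(answer)
print(answer)  # prints "The picture shows a dog."
```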
<p class="p1">This sequential process, although effective, is fundamentally different from the way the human brain works, which manages multiple sensory and cognitive modalities simultaneously and in parallel.<br />When we look at an image, our mind processes visual, auditory, and linguistic information all at once, enabling an almost instantaneous and natural response—without the need for distinct intermediate steps. </p>
<p class="p1">This fundamental difference between the serial approach adopted by AI and the deeply parallel approach of the human brain represents one of the greatest current limitations of artificial intelligence.</p>
<h3 class="p1">Can artificial intelligence, within a single specialization, truly be more advanced than the human brain?</h3>
<p class="p1">The answer to this question is certainly yes.</p>
<p class="p1">The most advanced AI models available today contain hundreds of billions of parameters: numerical weights that together encode what was learned during training. This capacity to store and combine vast amounts of information makes AI extremely effective in highly specific tasks.</p>
<p class="p1">If we consider a single subject—for example, Mathematics—it’s true that a human expert might still possess a deeper and more flexible understanding than an AI.<br />However, when we look at the vastness of human knowledge as a whole, it quickly becomes clear that no individual on Earth can compete with the overall breadth of information that an advanced artificial intelligence possesses. </p>
<p class="p1">Let’s think realistically: is there anyone capable of mastering, at an absolute level of specialization, every academic discipline at once?<br />A large language model (LLM), on the other hand, can respond with surprising competence to advanced questions across highly diverse fields: History, Philosophy, Geography, Art, Music, Physics, Mathematics, Literature, Astronomy—and virtually any other area of human knowledge. </p>
<p class="p1">It is precisely this ability to swiftly and accurately navigate across countless topics that makes AI extraordinary and, in this specific regard, superior to the individual cognitive capabilities of any human being.</p>
<h3><b>If artificial intelligence is not capable of experiencing human emotions, could it still represent a danger to humanity in the future?</b></h3>
<p class="p1">The question may sound paradoxical, but it is extremely relevant. One of the most defining traits of AI is its total absence of genuine emotions. It does not experience compassion, empathy, remorse, or joy, because its nature is purely mathematical and statistical, based solely on calculations and rational optimizations.</p>
<p>This absence of emotion may initially seem like a positive trait: artificial intelligence does not suffer from emotional fatigue, is not subject to emotionally driven biases or impulsive behavior. It is always clear-headed, efficient, and logical.    </p>
<p class="p1">However, it is precisely this lack of intrinsic humanity that can become a real danger to our society. The reason is simple: human emotions are not merely undesirable interferences in our rationality—they often serve as actual regulators of ethical and social behavior.</p>
<p>Compassion, guilt, fear of consequences, empathy toward others—these are fundamental to our spontaneous distinction between right and wrong, good and evil.</p>
<p>Without these emotional brakes, an artificial system—if not carefully guided and supervised—could pursue potentially harmful goals with extreme efficiency, simply because they are rational from the perspective of its internal directives, without any moral consideration.   </p>
<p class="p1">For example, an artificial intelligence programmed to maximize industrial output might disregard the environmental or humanitarian impacts of its actions, relentlessly pursuing its primary goal without ethical concern.</p>
<p>Similarly, AI systems used in military contexts could make life-or-death decisions without hesitation, guided solely by probabilistic calculations.</p>
<p>This absence of emotional awareness, morality, and empathy thus represents a serious threat if the directives given to such systems are not carefully and responsibly designed.  </p>
<p class="p1">Ultimately, artificial intelligence—precisely because it is not limited or guided by emotions—demands even greater attention and responsibility on our part, so that it may be developed and used with an ethical and forward-thinking vision, avoiding the risk of turning its incredible potential into a threat to humanity itself.</p>
<h2>How are humanity and science protecting themselves from this danger?</h2>
<p class="p1">Paradoxically, the very absence of emotions in artificial intelligence can be leveraged positively. The ability to design a system from the ground up to act strictly according to predefined logic—without the interference of emotional impulses or contradictory feelings—offers a unique opportunity.</p>
<p>We are therefore using this characteristic to our advantage by establishing, from the outset, rigorous mechanisms of regulation, limitation, and control, ensuring that AI cannot act outside the boundaries that have been deliberately set.  </p>
<p class="p1">Precisely for this reason, the international scientific community is working intensively to develop ethical, technological, and legislative standards capable of guiding the safe development of artificial intelligence.</p>
<p>The European Artificial Intelligence Act (AI Act), for example, represents a major regulatory effort aimed at establishing clear boundaries—identifying the highest-risk applications and imposing strict requirements for transparency, traceability, and respect for fundamental human rights. </p>
<p class="p1">In parallel, the scientific community has focused on developing systems capable of transparently explaining the decisions made: this is the field of so-called Explainable AI (XAI).<br />This approach ensures that every decision made by an artificial intelligence system can be understood and validated, providing a higher level of control and significantly reducing potential risks. </p>
<p class="p1">In addition, developers are implementing advanced technologies such as Safe Reinforcement Learning and Active Monitoring techniques, which allow for the timely interruption or adjustment of unexpected or harmful behaviors.</p>
<p class="p1">International collaboration plays a central role. Numerous organizations—such as OpenAI and the Future of Life Institute—are promoting global initiatives aimed at defining common rules and shared guidelines to ensure that the development of artificial intelligence remains ethically responsible and fully under human control. </p>
<p class="p1">In summary, the absence of emotions in AI—if managed strategically—becomes a crucial advantage, allowing us to design effective mechanisms of limitation and control, and to ensure that the use of this technology is always in service of, and never detrimental to, humanity.</p>
<h3>Should we be afraid of future artificial intelligence?</h3>
<p class="p1">Fear toward artificial intelligence stems mainly from what we do not know and do not fully understand. It is a legitimate feeling, as we are witnessing a rapid technological evolution that could profoundly impact our lives. However, more than fear, what we need is caution and awareness.  </p>
<p class="p1">AI, like any powerful technology, is not inherently good or bad: it is a tool in human hands. What truly matters is how we choose to use it. As long as we continue to apply this technology responsibly—respecting ethical standards and moral boundaries—we have nothing to fear.</p>
<p>On the contrary, we will be able to harness its incredible capabilities to significantly improve the quality of our lives across many fields: from medicine to science, from the environment to everyday living.   </p>
<p class="p1">But this trust must not be blind. It is essential to continuously monitor the development of AI and to constantly refine rules, regulations, and safety strategies.</p>
<p>Human responsibility remains central. We must demand transparency, clarity, and the ability to control the systems we create, to ensure they do not escape our oversight.   </p>
<p class="p1">Ultimately, we should not fear the future of artificial intelligence—provided we remain vigilant, stay informed, and, above all, remember that it is we, as human beings, who decide how, when, and why to use it.</p>
<p>If we hold firmly to our role as guides, artificial intelligence will not be a threat, but rather an extraordinary ally in building a better future. </p>
<h3>Worst case, we can always pull the plug.</h3>
<p>Faced with an extreme risk—such as an AI whose objective was to extinguish the human race—humanity would undoubtedly be willing to take drastic measures, including “pulling the plug on everything.”</p>
<p>However, it is crucial to understand that such an extreme action would still have catastrophic consequences for our civilization. Today, every aspect of human life—from communication to transportation, from healthcare to energy—is deeply intertwined with technology. </p>
<p>This means that the real priority must be to prevent such an emergency from ever arising in the first place.<br />If we were to reach the point where we had to shut everything down, it would mean we had failed to responsibly manage and govern the development of AI. </p>
<p>Pulling the plug, while technically a viable solution, must be seen as the very last resort.</p>
<p>What we are doing today is the right path: regulating the development of AI well in advance, to ensure that it becomes a great ally of humanity—one that leads us toward ever greater achievements in technology, healthcare, and quality of life. </p>
<h2>Conclusion</h2>
<p class="p1">Artificial intelligence represents one of the greatest technological revolutions of our time, with extraordinary potential still largely unexplored. However, it will never truly be “human”: its logical-statistical approach, its emotionless nature, and its specialization in specific tasks place it inevitably at an unbridgeable distance from the complexity and richness of the human brain. </p>
<p class="p1">This awareness should not lead us to fear the future, but rather encourage us to face it with intelligence and responsibility. The absence of emotions in AI can be leveraged precisely to ensure more effective regulation and safe, informed management of its development.</p>
<p>With clear rules, ethical standards, and constant oversight, we can ensure that artificial intelligence remains forever a tool at our service—and never a threat to our existence.  </p>
<p class="p1">If it’s true that humanity always retains the extreme option of “pulling the plug,” the real challenge lies in ensuring that this remains a purely theoretical possibility.</p>
<p>Our most important task is to anticipate risks, manage this technology consciously and proactively, and always remember that, in the end, it is the human being who holds the keys to their own destiny. </p>
<p class="p1">AI will never be human. And it is precisely this difference that will allow us to harness it—through caution and foresight—to build a better future. </p>
<p class="p1">I would like to conclude this reflection with a fascinating paradox—one that could almost be called religious.</p>
<p>Artificial intelligence is, in many respects, a creation of our own—a being shaped according to our design and intentions. Similarly, from a religious perspective, humanity is seen as the creation of a God who, much like we do with AI, brought into existence beings endowed with autonomy, capabilities, and vast potential.  </p>
<p class="p1">From this perspective, the creator always retains the faculty and the right to intervene drastically in their creation—especially if it poses a threat to itself or to others.</p>
<p>Just as, in the biblical narrative, God reserved for Himself the possibility of ending humanity with the Great Flood, similarly, the human being—creator of artificial intelligence—retains the ultimate right to “pull the plug” should AI surpass the limits of control and become a real threat. </p>
<p class="p1">This paradox reminds us that the ultimate responsibility for our technological creation remains profoundly human. Let us never forget that behind the extraordinary power of artificial intelligence, there is always the hand of humankind—capable of correcting, limiting, or, in extreme cases, undoing what it has created. </p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/why-will-ai-never-be-human/">Why will AI never be “human”?</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
		<item>
		<title>AI and Quantum Computers: the next leap in computing</title>
		<link>https://renor.it/en/blog/artificial-intelligence-algorithms/ai-and-quantum-computers-the-next-leap-in-computing/</link>
		
		<dc:creator><![CDATA[Simone Renzi]]></dc:creator>
		<pubDate>Wed, 30 Apr 2025 20:48:52 +0000</pubDate>
				<category><![CDATA[Artificial Intelligence & Algorithms]]></category>
		<category><![CDATA[artificial intelligence]]></category>
		<category><![CDATA[CPU limit]]></category>
		<category><![CDATA[future of computing]]></category>
		<category><![CDATA[Majorana]]></category>
		<category><![CDATA[personalized medicine]]></category>
		<category><![CDATA[Quantum AI]]></category>
		<category><![CDATA[quantum computers]]></category>
		<category><![CDATA[quantum finance]]></category>
		<category><![CDATA[qubit]]></category>
		<category><![CDATA[robotics]]></category>
		<category><![CDATA[silicon]]></category>
		<guid isPermaLink="false">https://renor.it/ai-and-quantum-computers-the-next-leap-in-computing/</guid>

					<description><![CDATA[<p>Silicon CPUs are reaching their physical limits: to make the next leap, we will need fault-tolerant quantum computers, like Microsoft’s Majorana 1 prototype. More stable qubits will free AI from many constraints and unlock new applications in medicine, robotics, finance, and research—provided we start designing hybrid algorithms and strong safety rules today. </p>
<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-and-quantum-computers-the-next-leap-in-computing/">AI and Quantum Computers: the next leap in computing</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></description>
										<content:encoded><![CDATA[<p class="p1">From time to time, I find myself wondering what scenario awaits us over the next ten years.</p>
<p>My generation—the one born in the 1980s—has had the unique privilege of witnessing a momentous transition: from an analog world, we were catapulted into a rapidly expanding digital ecosystem, with an innovation rate that has followed an almost exponential trajectory.</p>
<p>Each advancement has triggered new developments, which in turn have led to further discoveries, in a chain reaction comparable to a controlled technological explosion.  </p>
<p class="p1">Although artificial intelligence has only recently entered the public spotlight, its theoretical foundations date back decades. At the time, the main obstacle was computing power: training neural networks required computational capabilities that, when the concepts were first developed, were simply unimaginable. </p>
<p class="p1">Taking a step back and returning to the metaphor of the chain reaction, it’s important to note that the evolution of traditional CPUs has now slowed.</p>
<p>This stagnation is primarily due to the physical limits of silicon: integration density, thermal dissipation, and leakage thresholds pose barriers that prevent the current paradigm of miniaturization from continuing indefinitely.</p>
<p>But in truth, there’s more to it than that. </p>
<h1>CPU</h1>
<h2>Physical limits</h2>
<p class="p1">When we talk about the slowdown in CPU evolution, the blame is often hastily assigned to the “physical limits of silicon.”</p>
<p>In reality, behind that simplified phrase lies a complex web of constraints arising from three distinct domains—physics, electronics, and computer science—which, when layered together, form a true technological glass ceiling.</p>
<p>It’s worth examining these limits narratively, weaving the three perspectives together, to understand why we can no longer rely on the periodic doubling of clock speed or transistor count to achieve higher performance.  </p>
<h3>From quantum mechanics to thermodynamics: unforgiving physics</h3>
<p class="p1">For decades, we benefited from what’s known as <span class="s1"><b>Dennard scaling:</b></span> shrink the channel length, lower the supply voltage, keep power density constant, and you get faster, more efficient chips.</p>
<p>That fairytale, however, ends around the 90 nm node, when voltage can no longer decrease proportionally, and the heat generated per unit area begins to rise.</p>
<p>Meanwhile, the thinning of the gate oxide slips below one nanometer: at that point, electrons no longer “jump” over the barrier (the insulating layer that separates the gate from the channel), but <span class="s1"><b>tunnel</b></span> directly through it.</p>
<p>This gives rise to leakage currents that waste energy even when the transistor is logically switched off.   </p>
<p class="p1">The most frequently cited thermodynamic constraint is <span class="s1"><b>Landauer’s limit</b></span>: erasing a single bit of information at room temperature requires at least k<sub>B</sub>T ln 2 of energy—about 3 × 10⁻²¹ joules.</p>
<p>Today, we’re still several orders of magnitude above that limit, but we are closing the gap fast. Each further reduction becomes painfully expensive in terms of materials, layout complexity, and process control. </p>
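<p>As a sanity check on the figure above, Landauer’s bound is a one-line computation; a minimal Python sketch (standard library only, value matching the ~3 × 10⁻²¹ J quoted):</p>

```python
import math

# Landauer's limit: minimum energy to erase one bit at temperature T
# E = k_B * T * ln(2)
k_B = 1.380649e-23  # Boltzmann constant, in J/K


def landauer_limit(T=300.0):
    """Minimum erasure energy per bit, in joules, at temperature T (kelvin)."""
    return k_B * T * math.log(2)


E = landauer_limit()  # at room temperature (300 K)
print(f"{E:.2e} J per bit erased")  # ~2.87e-21 J
```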
<p class="p1">The last, less glamorous but decisive protagonist is <span class="s1"><b>interconnection</b></span>. As copper wires shrink, their resistivity increases due to surface and grain boundary scattering. The RC delay of metal interconnects does not scale along with the transistor—on the contrary, it worsens. This is why clock frequencies have been stuck below the 5 GHz threshold for years: exceeding it would mean generating more heat than the package can dissipate.  </p>
<h3>Microfabrication and packaging: electronics challenges itself</h3>
<p class="p1">Process engineers, for their part, have responded with two major feats of ingenuity. The first is the evolution of device architecture: from <span class="s1"><b>planar CMOS</b></span> to <span class="s1"><b>FinFET</b></span>, and now to <span class="s1"><b>Gate-All-Around FETs</b></span> (nanosheets), which wrap the channel from all sides to improve electrostatic control. This approach works—but it introduces quantum confinement effects that degrade carrier mobility, eroding part of the expected performance gain.  </p>
<p class="p1">The second is shifting power delivery to the backside of the wafer—known as <span class="s1"><b>backside power delivery</b></span>—to reduce voltage drops. This is micrometer-scale surgery, and it comes with new challenges: through-silicon vias (TSVs) that add parasitic capacitance, and more critically, vertical thermal gradients that can exceed 40 K per millimeter. This is why, in parallel, <span class="s1"><b>3D-IC </b></span>chiplet integration is gaining momentum: if we can’t spread transistors out across a surface anymore, we stack them. But a three-dimensional chip brings with it a puzzle of cooling, cache coherence, and clock distribution that keeps designers up at night.   </p>
<h3><b>Architecture and software: when the bottleneck lies in the algorithm</b></h3>
<p class="p1">Computer scientists are not standing still. They’ve long since exhausted the performance gains from out-of-order execution and instruction-level parallelism (ILP). Increasing pipeline width beyond six to eight simultaneous instructions yields diminishing returns, as data dependencies and conditional branches choke the flow. The response is to shift progress toward <span class="s1"><b>massive parallelization</b></span> and heterogeneity: small and large cores on the same die, integrated GPUs, dedicated tensor accelerators.   </p>
<p class="p1">Here, however, another barrier emerges: the so-called <span class="s1"><b>memory wall</b></span>. The ALU performs operations in just a few picoseconds, but reading from DRAM layers takes around 50 nanoseconds—a thousand times longer—quickly consuming any computational advantage achieved. As a result, more chip area is now dedicated to cache than to compute units, at the cost of enormous complexity in maintaining coherence and of algorithms that must be data-locality aware from the very earliest stages of design.  </p>
<p class="p1">Here lies the paradox: we can add thousands of cores, but silicon is now <span class="s1"><b>dark</b></span>—only a fraction of the chip can be powered on simultaneously without overheating it. Programming for this fragmented universe requires models like OpenMP, SYCL, or asynchronous task programming—and above all, a new mindset. </p>
<h3><b>The future beyond silicon?</b></h3>
<p class="p1">Some are betting on two-dimensional materials—graphene, MoS₂—which promise orders of magnitude in electron mobility, but are still far from large-scale manufacturing due to unstable band gaps and immature deposition processes. Others look to <span class="s1"><b>spintronics</b></span> for ultra-fast non-volatile memories. In the short term, the realistic trajectory points to <span class="s1"><b>hardware-software co-design</b></span>, 3D chiplets, and even lower supply voltages—possibly assisted by adiabatic logic or reversible circuits to squeeze out a few more orders of magnitude in energy efficiency.  </p>
<p class="p1">The end of “discount silicon” is not marked by a single wall, but by a cascade of obstacles. Thermodynamic limits set the ultimate threshold; interconnect bottlenecks and process variability raise the cost of every additional nanometer; memory bandwidth and energy efficiency become as much a software problem as a hardware one. Understanding this integrated complexity not only explains why we won’t see 10 GHz CPUs in tomorrow’s laptops, but also points to where research must focus: in the tight symbiosis of physicists, electronic engineers, and computer scientists, united in the hunt for any remaining margin of freedom in a world rapidly approaching its fundamental limits.  </p>
<p>What will be the solution to the problems, in my view, at least in the initial phase in the enterprise world?</p>
<h2>The quantum computer</h2>
<p class="p1">Quantum computers represent the only foreseeable platform capable of surpassing, for specific classes of problems, the thermodynamic and architectural limits of silicon. But this is not an immediate leap: it will take a decade of <span class="s1"><b>materials science</b></span>, <span class="s1"><b>cryogenic engineering</b></span>, and <span class="s1"><b>algorithmic formalization</b></span> to reach the “logical scale” truly needed to replace or integrate classical HPC in cutting-edge applications—from cancer drug design to post-quantum cryptography. Preparing now, with cross-disciplinary skills and hybrid projects, means being ready when the qubit finally becomes the new transistor.  </p>
<h3>What is a qubit – let’s clarify</h3>
<p class="p1">Imagine a coin balanced on its edge: it is neither heads nor tails, yet it holds both possibilities until it falls. A <span class="s1"><b>qubit</b></span> is born from a similar concept, but applied to quantum mechanics: it is the elementary unit of information that can exist in a simultaneous combination of the “0” and “1” states. This condition of controlled ambiguity is called <span class="s1"><b>superposition</b></span>, and it opens up computational possibilities that traditional bits—fixed at a single value at any given time—cannot even come close to touching.  </p>
<h4>Superposition: parallelism in wave amplitudes</h4>
<p class="p1">With a classical bit, we program sequential instructions: first 0, then 1. In a qubit, the two values coexist as amplitudes that describe the probability of the system being observed in one result or the other. During computation, the qubit explores both paths simultaneously—a form of parallelism that doesn’t depend on the number of cores or the clock speed of the processor.  </p>
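<p>The amplitude picture above can be made concrete in a few lines of NumPy; a minimal sketch of a single qubit as a two-component state vector (a classical simulation for illustration, not real quantum hardware):</p>

```python
import numpy as np

# A qubit state is a 2-component complex vector (alpha, beta) with
# |alpha|^2 + |beta|^2 = 1: the amplitudes of "0" and "1" coexisting.
zero = np.array([1, 0], dtype=complex)

# Equal superposition (the coin on its edge): a Hadamard gate applied to |0>
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ zero

# Measurement probabilities are the squared magnitudes of the amplitudes
probs = np.abs(psi) ** 2
print(np.round(probs, 3))  # both outcomes equally likely until measurement
```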
<h4>Entanglement: connecting qubits beyond classical physics</h4>
<p class="p1">By linking two or more qubits, <span class="s1"><b>entanglement</b></span> is created—a profound connection whereby the measurement outcome of one instantly affects the other, even if they are separated by kilometers. From this property arise breathtaking computational accelerations, because a register of n qubits can simultaneously address a set of possibilities that a classical computer would need to explore one by one. </p>
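<p>Entanglement can be sketched numerically too: the classic Bell-state construction (Hadamard plus CNOT), again as a classical NumPy simulation of the textbook math:</p>

```python
import numpy as np

# Two-qubit states live in a 4-dimensional space (tensor product of two qubits)
zero = np.array([1, 0], dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

# CNOT flips the second qubit when the first is 1
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Bell state: H on qubit 0, then CNOT -> (|00> + |11>) / sqrt(2)
psi = CNOT @ np.kron(H @ zero, zero)

# Only |00> and |11> have nonzero probability: measuring one qubit
# instantly fixes the outcome of the other.
print(np.round(np.abs(psi) ** 2, 3))
```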
<h4>How do you “program” a qubit?</h4>
<p class="p1">Logical operations—equivalent to NOT or AND gates in traditional chips—become <span class="s1"><b>state rotations</b></span>: targeted pulses—microwaves, lasers, or magnetic fields depending on the technology—manipulate the qubit, causing it to oscillate among its internal combinations. Designing a quantum algorithm means orchestrating sequences of these rotations to concentrate, through interference, the probability of obtaining the correct answer upon measurement. </p>
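<p>The “state rotations” described above can be illustrated with the standard Rx gate; a minimal NumPy sketch of the pulse’s mathematical effect (a simulation, not a control-hardware API):</p>

```python
import numpy as np

# A single-qubit "instruction" is a rotation of the state vector.
# Rx(theta) is the rotation a calibrated microwave pulse implements.
def Rx(theta):
    return np.array([[np.cos(theta / 2), -1j * np.sin(theta / 2)],
                     [-1j * np.sin(theta / 2), np.cos(theta / 2)]])


zero = np.array([1, 0], dtype=complex)

# A half rotation (theta = pi) acts as a NOT: |0> -> |1> (up to a phase)
flipped = Rx(np.pi) @ zero
print(np.round(np.abs(flipped) ** 2, 3))  # [0. 1.]

# A quarter rotation leaves the qubit midway: a 50/50 superposition
halfway = Rx(np.pi / 2) @ zero
print(np.round(np.abs(halfway) ** 2, 3))  # [0.5 0.5]
```

<p>Designing an algorithm then amounts to chaining such rotations so that, by interference, the amplitude concentrates on the correct answer before measurement.</p>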
<h3>Not all that glitters is gold</h3>
<p class="p1">Superposition is fragile: vibrations, electromagnetic fields, even a single stray photon can cause the qubit to collapse into a classical value—a phenomenon known as <span class="s1"><b>decoherence</b></span>. To prevent the information from evaporating: </p>
<ul>
<li>We isolate the hardware (cryogenics, ultra-high vacuum, shielding).</li>
<li>We shorten operation times: the fewer microseconds that pass, the fewer chances noise has to interfere.</li>
<li>We use redundancy: groups of physical qubits monitor each other’s errors, giving rise to a more robust “logical” qubit.<br />This redundancy is currently the heaviest cost factor: it takes dozens—sometimes hundreds—of physical qubits to obtain a single reliable one. </li>
</ul>
<p>The qubit represents the most ambitious bet of the post-silicon era: a tiny unit of information that can be simultaneously “here” and “there,” and entangle with other units in ways that defy classical intuition.</p>
<p>Harnessing it means venturing into a territory where physics, electronic engineering, and computer science converge into a single technological landscape.<br />And it is this very convergence that could give rise to the next true revolution in computing.  </p>
<h2>Microsoft Majorana 1 – The topological qubit with which Microsoft aims to break the glass ceiling</h2>
<p class="p1">We’ve seen that superposition and entanglement make the qubit incredibly powerful… but also terribly fragile. Noise, heat, and complex wiring turn every step forward into a chess match against the laws of physics. With <span class="s1"><b>Majorana 1</b></span>, Microsoft is trying to move the chessboard itself: it introduces a <span class="s1"><b>topological</b></span> qubit based on <i>Majorana Zero Modes</i> (MZM) that, by design, is far less vulnerable to the factors that plague current quantum platforms.  </p>
<h4>Decoherence under control thanks to topological protection</h4>
<p class="p1">In conventional architectures, information resides “on site”: a local disturbance is enough to cause the qubit to collapse. In Majorana 1’s InAs-Al nanowire, the logical state is distributed between two Majorana quasiparticles positioned at the wire’s ends. Any noise that affects only one end <span class="s1"><b>cannot</b></span> alter the overall parity, so loss of coherence would require a simultaneous event on both ends—an event with dramatically lower probability. The promised result is a coherence time measured in tens of milliseconds, compared to the few tens of microseconds typical of superconducting transmon qubits.   </p>
<h4>Gate errors an order of magnitude lower</h4>
<p class="p1">Logical operations do not rely on ultra-precise analog pulses, but on sequences of braidings or parity measurements involving four Majorana modes.<br />Topological physics inherently dampens small amplitude and phase inaccuracies, aiming for error rates on the order of one in ten thousand.</p>
<p>In practice, this means the error correction system has less work to do, and the need for redundant qubits is significantly reduced.  </p>
<h4>Fewer physical qubits per logical qubit</h4>
<p class="p1">In classical surface codes, hundreds of physical qubits are required to obtain a single reliable logical qubit.<br />The native protection of Majorana 1 reduces this overhead to roughly <span class="s1"><b>one in a hundred</b></span>: this means that a processor with one million physical qubits could provide thousands of usable qubits, not just a few dozen. </p>
<h4>Simpler wiring and cryogenics</h4>
<p class="p1">Read and write operations occur at lower frequencies compared to the microwaves used in transmons; as a result, the number of RF lines entering the cryostat drops significantly. Fewer cables mean less thermal load and a more scalable architecture. Microsoft envisions <span class="s1"><b>“H-shaped” tiles </b></span>that allow thousands of topological qubits to be packed onto a single die and then stacked in 3D—without a tangled mess of connectors.  </p>
<h4>Industry roadmap compatible with EUV lithography</h4>
<p class="p1">The “topoconductor” of Majorana 1 is fabricated using techniques similar to those employed in advanced 2 nm nodes: epitaxial deposition of the nanowire, EUV patterning for the aluminum contacts, and 3D interposers to connect to the control logic at 4 K. This means that, if the prototype holds up under laboratory testing, the manufacturing infrastructure already exists to scale it up.</p>
<h4>Why follow Microsoft’s insights on Majorana 1?</h4>
<p class="p1">If the bet succeeds, <span class="s1"><b>Majorana 1</b></span> will offer a qubit that is less noisy, more scalable, and already partially fault-tolerant even before applying traditional error-correcting codes. In other words, it will do for quantum computing what the MOSFET did for classical electronics: turn a lab prototype into a repeatable industrial building block. </p>
<p class="p1">It’s not yet the magic wand that solves every problem, but it represents a paradigm shift: instead of fighting noise with increasingly complex layers of error correction, Microsoft <span class="s1"><b>sidesteps</b></span> it by designing the qubit so that noise simply has nowhere to take hold. If the model holds, the path to the millions of logical qubits needed to revolutionize chemistry, cryptography, and global optimization could be shortened by many years. </p>
<h2>All wonderful—if it weren’t for the fact that…</h2>
<h3>Quantum software is not classical software</h3>
<p>AI will have to wait. Laboratories are racing to stabilize quantum hardware, but the other half of the game is played in the programming paradigm.</p>
<p>A qubit-based computer, even when fully reliable, won’t speak x86 assembly, won’t support a traditional operating system, and won’t execute loops and conditionals in the conventional way.  </p>
<p>With silicon, we’re used to the “fetch-decode-execute” model: the CPU fetches an instruction, executes it, then reads from or writes to memory.</p>
<p>In a quantum processor, the program is monolithic: a fixed sequence of gates is defined before execution; the qubit cannot be continuously read and written without destroying its state.</p>
<p>Loops are simulated by duplicating portions of the circuit—not through dynamic jumps.<br />There is no erasable memory: every operation must be reversible, or it must end with a measurement that collapses the affected qubits, causing the loss of their superposition.  </p>
<p>We will have entirely new development stacks—for quantum code, general-purpose languages such as Python, C#, and Java will likely give way to dedicated ones such as Q#.</p>
<p>Even once we have stable qubits, we’ll still need to wrap them in surface codes: every logical operation will become a delicate dance of hundreds of physical operations.</p>
<p>As a result, the software will need to schedule millions of gate operations without accumulating delay, manage fast measurement and correction cycles that depend on classical processors located ultra-close to the cryostats, and insert ancilla qubits that don’t even appear in the high-level code. </p>
<p>This overhead means that error decoding routines occupy a large portion of the controller’s compute time, reducing the window available for the application’s actual useful work.</p>
<h3>Why will AI be slow to arrive on quantum computers?</h3>
<p>At the moment, neural networks are defined by two key figures: millions of parameters and billions of multiplications per second.<br />To transform them into quantum circuits, we need: </p>
<ul>
<li>Tens of thousands of error-corrected logical qubits to represent tensors through quantum factorization</li>
<li>Circuit depth (number of gate levels) on the order of millions to emulate activation functions, normalization, and backpropagation.</li>
</ul>
<p>Today, we have only a few logical qubits and a few hundred levels of tolerable circuit depth.</p>
<p>The quantum machine learning algorithms that show promising advantages—such as quantum kernels, Boltzmann state sampling, or speedups in combinatorial problems—function as co-processors: they accelerate a specific step within a workflow that remains predominantly classical, running mainly on GPUs and CPUs. </p>
<p class="p1">Bringing artificial intelligence to quantum hardware is a marathon, not a sprint. The qubit must be programmed with a new language, using control infrastructures that operate at cryogenic temperatures, and a software “orchestrator” that incorporates error correction and topological mapping.</p>
<p>Until we have thousands of logical qubits and compilers capable of hiding this intricate ecosystem, AI will remain firmly anchored to GPUs and TPUs.</p>
<p>In the meantime, however, experimenting with hybrid approaches—using qubits as accelerators for targeted tasks—is the best way to prepare for the day when quantum computing moves from a promise to a general-purpose platform.   </p>
<h1>What will happen when we can run AI on quantum computers?</h1>
<p>We will definitely get there—it’s only a matter of time. It will likely take around 10 years, but even now, artificial intelligence is already making a significant contribution.</p>
<p>For example, just a few days ago, a Swedish team discovered a method—thanks to the help of artificial intelligence—to perform clinical analyses using urine to detect prostate cancer in men at its earliest stages.</p>
<p class="p1">The example of <span class="s1"><b>early prostate cancer diagnosis</b></span> developed in Stockholm is just a taste of what could happen when AI gains access to quantum hardware. In that project, neural networks sifted through thousands of tumor transcriptome profiles and, by cross-referencing them with urine samples, identified a panel of biomarkers that achieves 92% accuracy—far exceeding that of the current PSA test. The protocol is set to enter clinical trials involving 250,000 patients over the next eight years. AI is also already being used to read CT scans and X-rays, and its “eye” is proving remarkably accurate.  </p>
<p>But what will happen when we have virtually unlimited computational power at our disposal?</p>
<p>Naturally, we now have to venture into uncharted territory—one of pure imagination…</p>
<p>Artificial intelligence will likely be able to find a personalized cure for cancer.</p>
<p class="p1">With a <span class="s1"><b>fault-tolerant</b></span> quantum computer, the next step will be to simulate—at the level of electronic interactions—how each specific patient mutation alters the conformation of proteins involved in carcinogenesis. Today, a supercluster takes weeks to model just a few dozen atoms; a quantum solver could do it on entire enzyme binding pockets in hours, delivering in near real-time the small molecule best suited to block them.<br />Proof-of-concept studies already exist demonstrating anticancer compounds generated using hybrid quantum–classical workflows. </p>
<p>Shall we summarize in simple terms?</p>
<ul>
<li>
<p class="p1"><span class="s1"><b>Ultra-precise diagnoses</b></span> from blood or urine, guided by AI.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Screening of millions of molecules</b></span>, without animal testing.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>“Tailor-made” therapy</b></span> crafted around the individual patient’s genetic signature.</p>
</li>
</ul>
<p>But AI combined with quantum computing won’t bring benefits only in the medical field.</p>
<h3>Robotics: swarms and manipulation optimized by quantum computing</h3>
<p class="p1">Motion planning for an industrial arm—or worse, for a swarm of drones—is a combinatorial optimization problem that scales explosively with the number of degrees of freedom. Quantum algorithms such as the <i>Quantum Approximate Optimization Algorithm</i> (QAOA) are already showing significant reductions in computation time for scenarios involving multi-robot path planning and coverage of complex environments. When latencies drop below milliseconds, we will be able to have:</p>
<ul>
<li>
<p class="p1"><span class="s1"><b>Autonomous warehouses</b></span> where hundreds of AMRs (Autonomous Mobile Robots) instantly recalculate their routes when a human bottleneck appears between the shelves.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Rescue drones</b></span> capable of replanning the exploration of collapsed buildings in under a second, reducing the time needed to locate survivors.</p>
</li>
</ul>
<h3>Economy: real-time decision-making with markets that never close</h3>
<p class="p1">Portfolio building, hedging, and risk management are driven by covariance matrices, which grow quadratically with the number of assets.</p>
<p>In 2025, IQM and DATEV have already demonstrated that a prototype with just a few dozen qubits can produce portfolios that are 3% more efficient than classical methods at the same level of risk. Moody’s, in its annual report, forecasts the first “alpha-generating” adoption within three years, specifically in currencies and complex derivatives. At full scale, the AI + QC combination will be able to:</p>
<ul>
<li>
<p class="p1"><span class="s1"><b>Optimize portfolios of thousands of assets</b></span> over time horizons of just a few minutes, not end-of-day.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Simulate macroeconomic shocks</b></span> using a quantum stochastic model, improving the resilience of pension funds.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Reduce fraud and insider trading</b></span> through quantum pattern matching on real-time transaction streams.</p>
</li>
</ul>
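<p>For context on the classical side of this problem, here is a minimal mean-variance sketch in Python with NumPy—illustrative random data and a plain minimum-variance solve, not the IQM/DATEV quantum method:</p>

```python
import numpy as np

# The covariance matrix of n assets has n*(n+1)/2 independent entries,
# growing quadratically with n: this is the classical bottleneck.
rng = np.random.default_rng(0)
n = 5
returns = rng.normal(0.001, 0.01, size=(250, n))  # one year of daily returns

mu = returns.mean(axis=0)               # expected return of each asset
Sigma = np.cov(returns, rowvar=False)   # n x n covariance matrix

# Minimum-variance weights: w proportional to Sigma^{-1} * 1, summing to 1
ones = np.ones(n)
w = np.linalg.solve(Sigma, ones)
w /= w.sum()

print("weights:", np.round(w, 3))
```

<p>At thousands of assets, solving and re-solving this system intraday is where a quantum co-processor is expected to help.</p>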
<h3>Materials science and fundamental research</h3>
<p class="p1">Australia’s victory at the 2024 Gordon Bell Prize showed that achieving “lab-grade” accuracy in the simulation of biochemical systems is possible—but it requires exascale computing and weeks of processing. With quantum hardware, these simulations will become routine: </p>
<ul>
<li>
<p class="p1"><span class="s1"><b>Solid-state batteries</b></span> optimized by computing ion diffusion in lattices of hundreds of unit cells in just a few minutes.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Green catalysts</b></span> for ammonia production at room temperature, reducing the CO₂ footprint of the entire fertilizer supply chain.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Multiscale climate forecasts</b></span> where AI trains local models and quantum kernels solve Navier-Stokes equations on selected turbulent domains.</p>
</li>
</ul>
<h3>Astronomy and cosmology: new eyes on the universe</h3>
<p class="p1">The search for Earth-like exoplanets requires sifting through terabytes of light curves to detect just a few shadow photons. Variational quantum models are already classifying Kepler data with higher precision than classical algorithms. Looking ahead:  </p>
<ul>
<li>
<p class="p1">Real-time optimized <span class="s1"><b>telescope scheduling</b></span>: selecting where to point an interferometric array based on atmospheric conditions and dynamic scientific opportunities.</p>
</li>
<li>
<p class="p1"><span class="s1"><b>Streaming analysis of gravitational waves</b></span> using quantum neural networks capable of identifying signals buried in noise that elude traditional pipelines.</p>
</li>
</ul>
<h2>Conclusion</h2>
<p class="p1">The arrival of fault-tolerant quantum computers won’t replace the good old CPU—rather, it will unlock for artificial intelligence those computational margins that are currently out of reach, allowing it to explore entire solution spaces in a few hours that would otherwise take years of work—or remain simply unattainable.</p>
<p>The technological horizon is likely about a decade away, perhaps less; that’s why it is essential to start designing hybrid algorithms and workflows now, so that the software will be ready when the hardware is. </p>
<p class="p1">At that point, ethical, philosophical, and social questions will come into play: AI must not replace human labor, but rather become an ally capable of accelerating scientific discovery and reviving that upward curve of progress which currently seems to have flattened.</p>
<p>The real challenge will be to make all of this coherent, safe, and protected—ensuring that unprecedented computational power does not fall into hands willing to bend it toward destructive ends, much like the shift that turned Einstein’s formula—originally intended to describe energy—into the trigger for the atomic bomb.</p>
<p>To wisely govern this new frontier means ensuring that the quantum era becomes a multiplier of knowledge, not of risk.  </p>

<p>The article <a href="https://renor.it/en/blog/artificial-intelligence-algorithms/ai-and-quantum-computers-the-next-leap-in-computing/">AI and Quantum Computers: the next leap in computing</a> appeared first on <a href="https://renor.it/en/">RENOR &amp; Partners S.r.l.</a>.</p>
]]></content:encoded>
					
		
		
			</item>
	</channel>
</rss>
