<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Artificial Intelligence on MLLog.dev</title><link>https://mllog.dev/en/categories/artificial-intelligence/</link><description>Recent content in Artificial Intelligence on MLLog.dev</description><image><title>MLLog.dev</title><url>https://mllog.dev/images/default_mllog.png</url><link>https://mllog.dev/images/default_mllog.png</link></image><generator>Hugo -- 0.147.9</generator><language>en</language><lastBuildDate>Fri, 23 Jan 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://mllog.dev/en/categories/artificial-intelligence/index.xml" rel="self" type="application/rss+xml"/><item><title>Tensor Networks: A Mathematical Bridge Between Neural and Symbolic AI</title><link>https://mllog.dev/en/posts/tensor-networks-neuro-symbolic-ai/</link><pubDate>Fri, 23 Jan 2026 00:00:00 +0000</pubDate><guid>https://mllog.dev/en/posts/tensor-networks-neuro-symbolic-ai/</guid><description>&lt;p>Neural networks excel at learning patterns from data. Symbolic AI excels at logical reasoning and interpretability. For decades, researchers have tried to combine them — with limited success. A new paper proposes an elegant mathematical framework that unifies both approaches: &lt;strong>tensor networks&lt;/strong>. The key insight? Both neural and symbolic computations can be expressed as tensor decompositions, and inference in both reduces to tensor contractions.&lt;/p>
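&lt;p>As a minimal sketch of that idea (an illustrative example, not code from the paper), the same &lt;code>np.einsum&lt;/code> contraction can express both a neural layer and a Boolean rule:&lt;/p>
&lt;pre>&lt;code class="language-python"># Illustrative sketch: a neural layer and a logical rule as tensor contractions.
import numpy as np

# Neural view: a linear layer y = W x is a contraction over the input index i.
W = np.random.randn(4, 3)              # weight tensor (output x input)
x = np.random.randn(3)                 # input vector
y = np.einsum("oi,i->o", W, x)         # contract over i: same as W @ x

# Symbolic view: logical AND as a Boolean tensor T[a, b] = (a AND b);
# evaluating the rule over truth assignments is again a contraction.
T_and = np.array([[0, 0], [0, 1]], dtype=float)   # truth table of AND
a = np.array([0.0, 1.0])               # one-hot encoding: a is True
b = np.array([1.0, 0.0])               # one-hot encoding: b is False
ab = np.einsum("ab,a,b->", T_and, a, b)           # truth value of (a AND b)

print(y.shape, ab)                     # (4,) 0.0
&lt;/code>&lt;/pre>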
&lt;h2 id="the-problem-two-worlds-that-dont-talk">The Problem: Two Worlds That Don&amp;rsquo;t Talk&lt;/h2>
&lt;p>Modern AI is split into two camps:&lt;/p></description></item><item><title>Cost-Constrained LLM Cascades — Meet C3PO</title><link>https://mllog.dev/en/posts/llm-cascades-cost-constrained-c3po/</link><pubDate>Fri, 14 Nov 2025 00:00:00 +0000</pubDate><guid>https://mllog.dev/en/posts/llm-cascades-cost-constrained-c3po/</guid><description>&lt;p>Imagine you have an army of helpers — several different Large Language Models (LLMs), each capable of handling tasks ranging from simple queries to complex reasoning.&lt;br>
But each helper &lt;em>costs&lt;/em> something: time, compute, or actual money if you&amp;rsquo;re using an API.&lt;/p>
&lt;p>So the question is:&lt;br>
Can we orchestrate these models wisely — starting from the cheapest one that might do the job, escalating only when needed — &lt;strong>without exceeding a cost budget&lt;/strong>?&lt;/p></description></item></channel></rss>