<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Anthony Butler]]></title><description><![CDATA[Blog about emerging technology, govtech, fintech, and Saudi Arabia]]></description><link>https://abutler.com/</link><image><url>https://abutler.com/favicon.png</url><title>Anthony Butler</title><link>https://abutler.com/</link></image><generator>Ghost 5.60</generator><lastBuildDate>Tue, 21 Apr 2026 09:19:56 GMT</lastBuildDate><atom:link href="https://abutler.com/blog/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[From Hierarchy to Intelligence: How AI Rewrites the Theory of the Firm]]></title><description><![CDATA[<p>In 1937, Ronald Coase <a href="https://en.wikipedia.org/wiki/The_Nature_of_the_Firm?ref=abutler.com" rel="noreferrer">posed</a> a question that still underpins modern economics: if markets are efficient, why do firms exist at all? His answer was simple but profound. Firms exist because coordination through the market is costly. Negotiating contracts, discovering prices, aligning incentives, and managing uncertainty all impose friction. The</p>]]></description><link>https://abutler.com/from-hierarchy-to-intelligence-how-ai-rewrites-the-theory-of-the-firm/</link><guid isPermaLink="false">69ccc9072ed91304dc752d3f</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Wed, 01 Apr 2026 07:42:42 GMT</pubDate><content:encoded><![CDATA[<p>In 1937, Ronald Coase <a href="https://en.wikipedia.org/wiki/The_Nature_of_the_Firm?ref=abutler.com" rel="noreferrer">posed</a> a question that still underpins modern economics: if markets are efficient, why do firms exist at all? His answer was simple but profound. Firms exist because coordination through the market is costly. Negotiating contracts, discovering prices, aligning incentives, and managing uncertainty all impose friction. The firm, therefore, is not a natural construct but an attempt at optimisation: an internalisation of coordination to reduce transaction costs. Hierarchy, management, and process are not cultural artefacts; they are merely mechanisms designed to manage the flow of information and decisions under constraint.</p>
<p>Decades later, <a href="https://en.wikipedia.org/wiki/Conway%27s_law?ref=abutler.com" rel="noreferrer">Conway&#x2019;s Law</a> added another layer of insight. Organisations, Melvin Conway argued, design systems that mirror their communication structures. This observation is often interpreted narrowly in software architecture, but its implications are broader. It suggests that the structure of a firm is not only a response to coordination costs but also a determinant of the systems it produces. Monolithic organisations produce monolithic systems; fragmented teams produce distributed architectures. The boundary between organisational design and technical design is porous because both are shaped by the same underlying constraint: how information moves.</p>
<p>For most of the twentieth century and into the early twenty-first, this constraint remained largely unchanged. Humans act as the primary processors of organisational state. Information is collected, summarised, passed upwards, interpreted, and redistributed. Decisions are made through layers. Even in highly digitised enterprises, the core model persists. Dashboards, reports, workflow systems, and enterprise software have not eliminated the need for hierarchy; they have merely made it more efficient. The fundamental problem of how to coordinate complex activity across many actors has remained intact and unchallenged.</p>
<p>Artificial intelligence changes this in a way that is easy to underestimate. Much of the current discourse frames AI as a productivity tool, a way to make individuals more efficient. This framing is correct but perhaps incomplete. I believe that a much more significant shift is not at the level of individual productivity but at the level of coordination. AI introduces the possibility of maintaining a real-time, continuously updated model of an organisation&#x2019;s state. Not a static report or a lagging dashboard, but a living representation of work in progress, dependencies, constraints, and external signals. More importantly, it introduces the ability to act on that model.</p>
<p>This is the transition from assistance to orchestration. Traditional tools support individuals in performing tasks. AI systems can coordinate tasks across individuals and teams, dynamically allocating work, resolving dependencies, and prioritising actions based on a global view. The functions historically performed by layers of management, such as aggregating information, routing decisions, and synchronising effort, become, in some sense, computational problems rather than organisational ones.</p>
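<p>To make the idea concrete, the following toy sketch treats allocation as a computation over a global view of tasks, dependencies, and capacity rather than as a managerial act. All names are hypothetical; this is an illustration, not a reference implementation.</p>
<pre><code class="language-python"># Toy illustration of coordination as computation (all names hypothetical).
# A shared view of tasks and dependencies replaces manual routing through a
# management hierarchy: work is released the moment its inputs exist.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    depends_on: list = field(default_factory=list)
    done: bool = False

def allocate(tasks, workers):
    """Assign every ready task (all dependencies met) to the least-loaded worker."""
    load = {w: 0 for w in workers}
    assignments = {}
    by_name = {t.name: t for t in tasks}
    for t in tasks:
        ready = not t.done and all(by_name[d].done for d in t.depends_on)
        if ready:
            worker = min(load, key=load.get)  # global view, so no escalation needed
            assignments[t.name] = worker
            load[worker] += 1
    return assignments

tasks = [Task("draft"), Task("review", ["draft"]), Task("publish", ["review"])]
tasks[0].done = True
print(allocate(tasks, ["amal", "omar"]))  # {'review': 'amal'}
</code></pre>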
<p>Seen through a Coasean lens, this has immediate implications. If firms exist to reduce the cost of coordination, then a dramatic reduction in those costs should change the structure, and size, of the firm itself. As coordination becomes cheaper, the need to internalise it through hierarchy diminishes. The boundary of the firm, which Coase described as expanding or contracting based on transaction costs, becomes fluid in a new way. However, this is not a simple reversion to market-based coordination. Instead, coordination is internalised in a different form: software-defined intelligence systems that operate with near-zero marginal cost.</p>
<p>Conway&#x2019;s Law also begins to invert under these conditions. If systems historically mirrored organisational structure, what happens when the system itself becomes the primary coordinating mechanism? The relationship flips. Instead of organisations shaping systems, systems shape organisations in a far more direct and dynamic way. The architecture is no longer a reflection of communication patterns; it becomes the substrate through which communication and coordination occur. The organisation becomes, in effect, an emergent property of the system itself.</p>
<p>I believe this has implications for how we think about management. Much of what is labelled as management today is, in practice, coordination work. It involves gathering context, making trade-offs, assigning resources, and ensuring alignment across functions. These activities are structured responses to the limitations of human information processing. When those limitations are relaxed, the necessity of these roles comes into question. This does not imply that leadership disappears. Judgement, direction-setting, cultural cohesion, and accountability remain fundamentally human concerns, but the mechanical aspects of coordination, the routing layer of the firm, are increasingly automatable.</p>
<p>The result is not simply a flatter organisation, although that may be a visible effect. It is a different organisational model altogether. Instead of static hierarchies and predefined workflows, we move towards systems that continuously adapt based on real-time information. Work is not assigned through fixed reporting lines but allocated dynamically based on context. Decision-making is not escalated through layers but resolved through a combination of local judgement and system-level optimisation. Structure becomes fluid, and the organisation behaves less like a machine and more like a responsive system.</p>
<p>This shift also challenges many of the assumptions embedded in enterprise software and operating models. Much of today&#x2019;s infrastructure is built around the idea that information is incomplete, delayed, and fragmented. Processes are designed to compensate for this through checkpoints, approvals, and escalation paths. If information becomes continuously available and coordination can be automated, these compensatory mechanisms become sources of friction rather than enablers of control. The architecture of the firm must change accordingly.</p>
<p>What emerges is a new conception of the firm as an intelligence system rather than a hierarchy. At its core is a layer that maintains a global view of state and orchestrates activity across the organisation. Around it are human actors who contribute judgement, creativity, and domain expertise. The relationship between the two is not one of tool and user but of system and participant. The system does not merely support work; it shapes and directs it.</p>
<p>The broader implication is that we are moving beyond the constraints that defined both Coase&#x2019;s theory of the firm and Conway&#x2019;s observation about system design. For nearly a century, organisations have been structured around the limitations of human coordination. AI does not eliminate the need for coordination, but it transforms its economics and its implementation. As a result, the fundamental rationale for hierarchy begins to erode.</p>
<p>The question, therefore, is not whether AI will improve existing organisations, but whether organisations built on pre-AI assumptions can remain competitive. Firms that continue to treat coordination as a human-centric, hierarchical process will carry structural inefficiencies that others do not. Those that reconfigure themselves around intelligence-driven coordination will not simply operate more efficiently; they will operate differently.</p>
<p>In this sense, the firm is being redefined. It is no longer just a boundary within which transactions are organised, nor a structure that shapes the systems it produces. It is becoming a dynamic, software-mediated entity in which coordination is continuous, adaptive, and largely automated. Coase explained why firms exist. Conway explained why they take the form they do. The emergence of AI suggests that both the reason and the form are now subject to change.</p>]]></content:encoded></item><item><title><![CDATA[From AutoResearch to Proof-of-Improvement: Decentralising Optimisation and Discovery]]></title><description><![CDATA[<p>Autonomous research systems, such as Andrej Karpathy&#x2019;s&#xA0;<a href="https://github.com/karpathy/autoresearch?ref=abutler.com" rel="noreferrer"><em>autoresearch</em></a>, demonstrate a simple yet powerful paradigm for iterative model improvement.  In this framework, an agent modifies a training script, executes short experiments, evaluates performance against a defined metric, and retains only those changes that yield measurable improvements.  While minimal</p>]]></description><link>https://abutler.com/from-autoresearch-to-proof-of-improvement-decentralising-optimisation-and-discovery/</link><guid isPermaLink="false">69bfaa3f2ed91304dc752ce5</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sun, 22 Mar 2026 09:03:05 GMT</pubDate><content:encoded><![CDATA[<p>Autonomous research systems, such as Andrej Karpathy&#x2019;s&#xA0;<a href="https://github.com/karpathy/autoresearch?ref=abutler.com" rel="noreferrer"><em>autoresearch</em></a>, demonstrate a simple yet powerful paradigm for iterative model improvement.  In this framework, an agent modifies a training script, executes short experiments, evaluates performance against a defined metric, and retains only those changes that yield measurable improvements.  While minimal in implementation, this structure captures an important computational asymmetry: discovering improvements is often resource-intensive, whereas verifying the quality of a given improvement is comparatively inexpensive.</p>
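<p>A minimal sketch of this propose-evaluate-retain loop might look as follows. The objective and mutation operator are invented for illustration; this is not the actual <em>autoresearch</em> code.</p>
<pre><code class="language-python"># Hedged sketch of the propose-evaluate-retain loop (hypothetical names).
# Discovery (proposing mutations) is the expensive, exploratory part;
# verification (re-scoring a candidate) is cheap and deterministic.
import random

def evaluate(params):
    """Deterministic scoring of a candidate; stands in for a short experiment."""
    return -(params["lr"] - 0.01) ** 2  # toy objective that peaks at lr = 0.01

def propose_mutation(params):
    """Stands in for an agent editing the training script or hyperparameters."""
    candidate = dict(params)
    candidate["lr"] *= random.choice([0.5, 0.9, 1.1, 2.0])
    return candidate

best = {"lr": 0.001}
best_score = evaluate(best)
for _ in range(200):
    candidate = propose_mutation(best)
    score = evaluate(candidate)
    if score > best_score:  # retain only measurable improvements
        best, best_score = candidate, score
print(best, best_score)
</code></pre>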
<p>This asymmetry is not unique to machine learning.  It closely resembles classes of problems I studied in operations research, where identifying optimal or near-optimal solutions is computationally difficult, but validating candidate solutions is straightforward.  For example, in supply chain optimisation, determining optimal routing, inventory allocation, and replenishment strategies across a network is complex; however, evaluating a proposed solution in terms of cost, service levels, and constraint satisfaction is deterministic.  Similarly, in scheduling problems, constructing feasible schedules across multiple constraints is challenging, whereas verifying feasibility and computing objective values is relatively trivial.  In financial optimisation, exploring portfolio allocations under constraints is computationally intensive, but verifying risk, return, and compliance with constraints is direct.</p>
<p>The&#xA0;<em>autoresearch</em>&#xA0;framework effectively operationalises this asymmetry in the context of machine learning by structuring the problem as a search over program space &#x2013; encompassing code, hyperparameters, and architectures &#x2013; paired with deterministic evaluation. Only those candidate modifications that improve the objective function are retained, creating a localised process of iterative optimisation.</p>
<p>This structure suggests a natural extension beyond a single execution environment.  Rather than a single agent performing sequential experimentation, one may consider a distributed setting in which multiple independent agents propose candidate modifications, a shared evaluation function is applied, and results are verified by independent parties.  In such a system, accepted improvements can be recorded in a shared state, transforming the process from a local loop into a coordinated protocol.</p>
<p>Under this formulation, the system begins to resemble a distributed ledger, albeit with a distinct objective.  Instead of ordering financial transactions, the system orders improvements.  Each accepted result constitutes a commit that references a prior state, incorporates a defined modification (e.g. a code change and resulting model checkpoint), and is associated with a measurable change in performance. Unlike linear blockchains, however, the structure is more naturally represented as a <a href="https://en.wikipedia.org/wiki/Directed_acyclic_graph?ref=abutler.com" rel="noreferrer">directed acyclic graph (DAG)</a>, where multiple branches of experimentation coexist, and superior results propagate through selection mechanisms.</p>
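<p>Concretely, an accepted improvement might be recorded as something like the following. The field names are assumptions for illustration, not a specification.</p>
<pre><code class="language-python"># Sketch of an improvement record in a DAG of experiments. Each accepted
# result references one or more parent states, so branches can coexist.
import hashlib, json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class ImprovementCommit:
    parents: tuple        # commit hashes of the prior states this builds on
    diff_hash: str        # content address of the code change
    checkpoint_hash: str  # content address of the resulting model artefact
    metric: float         # measured value of the shared objective

    def commit_hash(self):
        payload = json.dumps(asdict(self), sort_keys=True).encode()
        return hashlib.sha256(payload).hexdigest()

genesis = ImprovementCommit((), "d1f...", "c0a...", metric=0.712)
child = ImprovementCommit((genesis.commit_hash(),), "9be...", "77e...", metric=0.731)
print(child.commit_hash()[:16])
</code></pre>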
<p>The central primitive in such a system may be described not as Proof of Work or Proof of Stake, but as&#xA0;<em>Proof-of-Improvement</em>.  A submission is considered valid if it demonstrably improves a predefined metric, can be reproduced under specified conditions, and is evaluated deterministically within acceptable tolerances.  This replaces traditional hash-based proofs with proofs grounded in useful computation.</p>
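<p>A hedged sketch of the resulting validity check, assuming a deterministic evaluation function and an agreed tolerance:</p>
<pre><code class="language-python"># Proof-of-Improvement predicate (illustrative): a submission is accepted only
# if independent re-execution reproduces the claimed metric within a tolerance
# and the reproduced value beats the parent state. rerun_evaluation is a
# hypothetical stand-in for the verifier re-running the committed evaluation.
def is_valid_improvement(claimed_metric, parent_metric, rerun_evaluation,
                         tolerance=1e-3):
    reproduced = rerun_evaluation()
    if abs(reproduced - claimed_metric) > tolerance:
        return False  # the claim does not reproduce
    return reproduced > parent_metric  # must demonstrably improve the metric

print(is_valid_improvement(0.731, 0.712, rerun_evaluation=lambda: 0.7309))  # True
</code></pre>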
<p>This reframes discovery as a consensus problem: not agreeing on state, but <em>agreeing on progress</em>.</p>
<p>To operate in an environment of untrusted participants, the system can be decomposed into three functional layers:</p>
<ul><li>First, an <em>execution layer</em>, in which agents perform experiments and generate candidate improvements off-chain;</li><li>Second, a <em>verification layer</em>, where submitted results are independently re-evaluated to confirm claimed performance;</li><li>Third, a <em>coordination layer</em>, in which commitments, such as hashes of code, datasets, and model artefacts, are recorded, experiment lineage is tracked, and rewards or attribution are assigned.  The use of content-addressed artefacts ensures integrity and traceability across all components (see the sketch below).</li></ul>
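<p>The commitments themselves require nothing exotic; content addressing is plain hashing. The record format below is an assumption for illustration, not a protocol definition.</p>
<pre><code class="language-python"># Sketch of the coordination layer's content addressing: artefacts are
# committed by hash, so any party can later confirm that the code, dataset,
# and checkpoint they verified match what was originally claimed.
import hashlib

def content_address(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

record = {
    "code": content_address(b"def train(): ..."),
    "dataset": content_address(b"rows,of,training,data"),
    "checkpoint": content_address(b"model-weights-bytes"),
}
print(record["code"][:16])  # anyone holding the artefact can recompute this
</code></pre>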
<p>A critical aspect of this design is the incentivisation of the verification layer.  While verification is computationally cheaper than discovery, it is not costless, and therefore requires explicit economic alignment.  Validators may be required to stake resources when attesting to the correctness of a submission, receiving rewards for accurate validation and incurring penalties in cases of incorrect or fraudulent attestations.  Similarly, mechanisms for third-party challenge can be introduced, whereby participants are rewarded for identifying invalid results. In this way, verification is transformed into an economically secured process, ensuring that truthful validation is the dominant strategy even in the presence of untrusted participants.</p>
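<p>The economics can be sketched with toy bookkeeping; the reward and slashing parameters below are invented purely for illustration.</p>
<pre><code class="language-python"># Toy stake accounting for economically secured verification: validators
# stake resources, earn rewards for accurate attestations, and are slashed
# when a successful challenge shows an attestation was false.
stakes = {"validator_a": 100.0, "validator_b": 100.0}
REWARD = 1.0            # paid for an attestation that survives challenges
SLASH_FRACTION = 0.5    # forfeited when an attestation is proven wrong

def settle_attestation(validator, attestation_correct):
    if attestation_correct:
        stakes[validator] += REWARD
    else:
        stakes[validator] -= stakes[validator] * SLASH_FRACTION

settle_attestation("validator_a", True)   # honest: stake grows to 101.0
settle_attestation("validator_b", False)  # fraudulent: slashed to 50.0
print(stakes)
</code></pre>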
<p>The principal challenges in such a system are not computational but structural. These include preventing metric manipulation and overfitting, maintaining the integrity of evaluation datasets (including the use of hidden or rotating test sets), managing non-determinism in execution environments, coordinating validation without centralised trust, and controlling the expansion and convergence of the experimental search graph.  These challenges are analogous to those encountered in large-scale optimisation and distributed systems, where incentives, constraints, and feedback loops must be carefully designed to ensure stability and efficiency.</p>
<p>An additional extension of this framework involves the application of zero-knowledge techniques.  In this context, participants could, in principle, provide cryptographic proofs that a computation was executed correctly without revealing the full details of the computation or underlying data.  For example, it may be possible to prove that a model evaluation was conducted on a committed dataset or that a reported metric corresponds to a specific code and input configuration.  While such techniques remain computationally intensive for full training processes, they are increasingly applicable to evaluation steps and constrained verification tasks, thereby reducing trust assumptions in specific components of the system.</p>
<p>Overall, this approach reframes model development as a distributed search process over program space, coordinated through reproducibility, cryptographic commitments, and incentive mechanisms.  Experiments function as transactions, improvements as consensus, and computational effort as a form of stake.  Systems such as&#xA0;<em>autoresearch</em>&#xA0;provide a minimal illustration of this paradigm; however, the underlying concept is more general. When extended into a distributed setting, it yields a protocol for coordinating discovery itself, aligning closely with established principles in operations research and large-scale optimisation.</p>
<p>The implication is that intelligence is no longer solely engineered within institutions, but coordinated across a distributed system &#x2013; where progress itself becomes a matter of consensus.</p>]]></content:encoded></item><item><title><![CDATA[Cryptographic Dead-Man Switches for Sovereign Infrastructure]]></title><description><![CDATA[
<p>Recent attacks targeting data center infrastructure in the UAE and Bahrain highlight an uncomfortable reality:&#xA0;digital infrastructure is increasingly becoming a geopolitical target.</p>
<p>Modern states depend deeply on digital systems. Government services, financial infrastructure, national registries, and increasingly artificial intelligence systems all run on large-scale compute platforms. If the</p>]]></description><link>https://abutler.com/cryptographic-data-embassies-verifiable-sovereign-recovery/</link><guid isPermaLink="false">69b1cea72ed91304dc752c56</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Wed, 11 Mar 2026 20:43:42 GMT</pubDate><content:encoded><![CDATA[
<p>Recent attacks targeting data center infrastructure in the UAE and Bahrain highlight an uncomfortable reality:&#xA0;digital infrastructure is increasingly becoming a geopolitical target.</p>
<p>Modern states depend deeply on digital systems. Government services, financial infrastructure, national registries, and increasingly artificial intelligence systems all run on large-scale compute platforms. If the domestic infrastructure hosting these systems were destroyed or disabled, the consequences could be severe.</p>
<p>What makes this challenge particularly difficult is that the traditional assumptions behind disaster recovery no longer fully apply. Enterprise systems typically assume that failures are accidental: hardware breaks, networks fail, or natural disasters occur. National infrastructure must increasingly assume something else entirely &#x2014; that digital systems themselves may become deliberate targets in geopolitical conflict.</p>
<p>This changes the design problem fundamentally.</p>
<p>A government cannot simply replicate its most sensitive data abroad and rely on operational controls or legal agreements to protect it. Any infrastructure capable of restoring the digital state must also be assumed to exist in an environment where different legal authorities, intelligence services, and operational actors may have access to it.</p>
<p>In other words,&#xA0;resilience and sovereignty become tightly coupled architectural problems.</p>
<h2 id="the-data-embassy-dilemma">The Data Embassy Dilemma</h2>
<p>One natural response is geographic redundancy: replicate national data abroad so that services can be restored elsewhere. This idea underpins&#xA0;data embassies, where a nation stores critical digital assets in secure facilities located in trusted foreign jurisdictions.</p>
<p>Estonia&#x2019;s data embassy in Luxembourg is perhaps the best-known example of this approach. By placing encrypted government data in a friendly jurisdiction, the state ensures that critical services could continue operating even if domestic infrastructure were lost.</p>
<p>However, data embassies introduce a difficult tension.</p>
<p>To achieve resilience, sensitive data must exist outside the country. But once the data exists abroad,&#xA0;sovereignty becomes harder to guarantee. Even trusted partners operate under different legal systems, intelligence authorities, and operational realities.</p>
<p>The question therefore becomes:</p>
<blockquote><strong>Can sovereign data be replicated abroad while remaining cryptographically unusable unless domestic infrastructure is destroyed?</strong></blockquote>
<p>The answer lies in&#xA0;threshold cryptography.</p>
<h2 id="cryptographic-inertia-and-dead-man-switches">Cryptographic Inertia and Dead-Man Switches</h2>
<p>The key design principle is simple: offshore replicas of sovereign data should remain&#xA0;cryptographically inert&#xA0;during normal operations.</p>
<p>Foreign infrastructure may store encrypted data, but it should never possess the ability to decrypt it.  Only if domestic infrastructure disappears (suggesting catastrophic failure) should recovery become possible.  In effect, the system behaves like a&#xA0;cryptographic dead-man&#x2019;s switch&#xA0;for national infrastructure.</p>
<p>During normal operations, offshore copies of national data are effectively inert ciphertext. They can be stored, replicated, and protected, but they cannot be used.</p>
<p>Only when specific cryptographic conditions are satisfied can the data be unlocked and systems restored.</p>
<h2 id="a-simple-architecture">A Simple Architecture</h2>
<p>Consider a country operating a sovereign data center that hosts government systems. The data is replicated to several offshore locations acting as&#xA0;data embassies.</p>
<p>All data is encrypted using modern symmetric encryption such as AES-256.</p>
<p>Each dataset is encrypted with a&#xA0;Data Encryption Key (DEK). Those keys are then encrypted by a higher-level&#xA0;sovereign master key.</p>
<p>The structure therefore looks like this:</p>
<blockquote>Government Data<br>      &#x2193;<br>Encrypted with DEK<br>      &#x2193;<br>DEK encrypted with Sovereign Master Key</blockquote>
<p>This layered key structure is standard in modern cryptographic systems because it allows large datasets to be encrypted efficiently while keeping the ultimate control point (the master key) small and manageable.</p>
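<p>A minimal sketch of this envelope-encryption pattern, using the Python <code>cryptography</code> package, is shown below. It is illustrative only; a real deployment would generate and hold the master key inside hardware security modules.</p>
<pre><code class="language-python"># Envelope encryption sketch: the DEK encrypts the data, and the sovereign
# master key only ever encrypts the DEK, so offshore replicas hold nothing
# but inert ciphertext plus a wrapped key they cannot open.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

master_key = AESGCM.generate_key(bit_length=256)  # the sovereign control point
dek = AESGCM.generate_key(bit_length=256)         # per-dataset data key

# Encrypt the dataset with the DEK.
data_nonce = os.urandom(12)
ciphertext = AESGCM(dek).encrypt(data_nonce, b"national registry records", None)

# Wrap the DEK with the master key; only this small blob depends on it.
dek_nonce = os.urandom(12)
wrapped_dek = AESGCM(master_key).encrypt(dek_nonce, dek, None)

# Recovery path: unwrap the DEK, then decrypt the data.
recovered_dek = AESGCM(master_key).decrypt(dek_nonce, wrapped_dek, None)
print(AESGCM(recovered_dek).decrypt(data_nonce, ciphertext, None))
</code></pre>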
<p>The sovereign master key becomes the critical control point.</p>
<p>Rather than storing this key in a single location, it is divided using&#xA0;threshold cryptography.</p>
<p>For example, the master key might be split into five shares, with any three required to reconstruct it.</p>
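<p>The mechanics are worth seeing once. The self-contained sketch below splits a secret into five shares with a threshold of three; the parameters are toy-sized, and a production system would use an audited implementation.</p>
<pre><code class="language-python"># Shamir's secret sharing over a prime field: a random polynomial of degree
# threshold-1 hides the secret in its constant term; any 3 of the 5 points
# determine the polynomial, while 2 or fewer reveal nothing.
import random

PRIME = 2**127 - 1  # a Mersenne prime, large enough for this demo

def split(secret, shares=5, threshold=3):
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, shares + 1)]

def reconstruct(points):
    # Lagrange interpolation at x = 0 recovers the constant term.
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split(123456789)
print(reconstruct(shares[:3]) == 123456789)                         # True
print(reconstruct([shares[0], shares[2], shares[4]]) == 123456789)  # True
</code></pre>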
<p>Those shares can be distributed across different authorities:</p>
<table><thead><tr><th>Location</th><th>Key Shares Held</th></tr></thead><tbody><tr><td>Domestic sovereign data center</td><td>1</td></tr><tr><td>National cyber authority</td><td>1</td></tr><tr><td>Data embassy A</td><td>1</td></tr><tr><td>Data embassy B</td><td>1</td></tr><tr><td>Data embassy C</td><td>1</td></tr></tbody></table>
<p>Under normal conditions, domestic infrastructure holds enough shares to operate its systems.</p>
<p>The offshore sites possess encrypted data but do not have enough key shares to decrypt it.</p>
<p>This ensures the replicated data remains&#xA0;cryptographically locked, even if the infrastructure hosting it is compromised.</p>
<h3 id="catastrophic-recovery">Catastrophic Recovery</h3>
<p>To allow recovery when domestic infrastructure is lost, the system monitors the availability of sovereign systems.</p>
<p>Domestic infrastructure periodically produces&#xA0;cryptographically signed heartbeat signals&#xA0;indicating that it is operational.  As long as these signals continue, offshore systems cannot reconstruct the master key.</p>
<p>If the signals disappear for a defined period, perhaps hours or days, the system assumes catastrophic failure.  At that point, the offshore key holders can cooperate to reach the cryptographic threshold.</p>
<p>For example:</p>
<ul><li>Data embassy A</li><li>Data embassy B</li><li>National cyber authority</li></ul>
<p>Together these parties reconstruct the sovereign master key.</p>
<p>The encrypted DEKs can then be unlocked, allowing the replicated data to be restored and government systems to be restarted in offshore environments.</p>
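<p>A sketch of the heartbeat verification and release condition that gates this recovery follows; the message format and 72-hour window are assumptions, not a standard.</p>
<pre><code class="language-python"># Dead-man logic sketch: the domestic site signs timestamped heartbeats, and
# offshore holders treat an authentic-but-stale stream as the release signal.
import time
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

domestic_key = Ed25519PrivateKey.generate()
public_key = domestic_key.public_key()  # pre-shared with the data embassies
TIMEOUT_SECONDS = 72 * 3600             # illustrative grace period

def emit_heartbeat():
    message = str(int(time.time())).encode()
    return message, domestic_key.sign(message)

def release_condition_met(message, signature, now):
    try:
        public_key.verify(signature, message)  # is the heartbeat authentic?
    except InvalidSignature:
        return False  # forged or corrupted signals never trigger recovery
    age = now - int(message.decode())
    return age > TIMEOUT_SECONDS  # only prolonged silence triggers release

msg, sig = emit_heartbeat()
print(release_condition_met(msg, sig, now=time.time()))  # False while alive
</code></pre>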
<p>In effect, the digital state can be<strong>&#xA0;reconstituted abroad</strong>.</p>
<h2 id="why-threshold-cryptography-matters">Why Threshold Cryptography Matters</h2>
<p>Threshold cryptography is uniquely suited to this problem because it eliminates single points of trust.</p>
<p>No single institution, whether domestic or foreign, holds the full cryptographic authority required to decrypt national data.  Instead, control is distributed across multiple independent actors.</p>
<p>This ensures that:</p>
<ul><li>offshore infrastructure cannot unilaterally access sovereign data;</li><li>a compromised embassy cannot decrypt stored data;</li><li>a hostile jurisdiction cannot force disclosure of encryption keys.</li></ul>
<p>Only when the defined threshold of independent authorities cooperates can recovery occur.</p>
<p>In other words,&#xA0;control of national data becomes a distributed cryptographic process rather than an institutional privilege.</p>
<h2 id="sovereignty-through-mathematics">Sovereignty Through Mathematics</h2>
<p>Traditional data embassy models rely primarily on legal agreements and operational procedures. Those mechanisms remain important, but they ultimately depend on trust.</p>
<p>Threshold cryptography introduces something stronger.</p>
<p>Even trusted partners hosting the infrastructure cannot access sovereign data without cooperation from the home nation &#x2014; or without catastrophic loss of the domestic systems.</p>
<p>In this model, sovereignty is enforced not just by diplomacy, but by&#xA0;mathematics.</p>
<h2 id="the-next-layer-of-national-infrastructure">The Next Layer of National Infrastructure</h2>
<p>Historically, governments focused on protecting territory, airspace, and maritime routes.</p>
<p>Today they must also protect&#xA0;data and computation.</p>
<p>As digital infrastructure becomes strategically important, resilience must be built directly into the architecture of national systems.</p>
<p>Data embassies provide geographic redundancy; cryptography ensures sovereign control.</p>
<p>Together they offer a way for nations to ensure that&#xA0;their digital systems can survive catastrophe without surrendering sovereignty.</p>]]></content:encoded></item><item><title><![CDATA[The AI Productivity Paradox?]]></title><description><![CDATA[<p>Most people believe AI is already transforming workplace productivity. Tools like ChatGPT are everywhere. Workers are writing faster, coding more efficiently, and summarising dense documents in seconds. But here&#x2019;s a less popular, but increasingly important idea:</p>
<blockquote>The productivity gains from today&#x2019;s AI tools largely accrue to</blockquote>]]></description><link>https://abutler.com/the-ai-productivity-paradox/</link><guid isPermaLink="false">67e954982ed91304dc752965</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sun, 30 Mar 2025 14:40:40 GMT</pubDate><content:encoded><![CDATA[<p>Most people believe AI is already transforming workplace productivity. Tools like ChatGPT are everywhere. Workers are writing faster, coding more efficiently, and summarising dense documents in seconds. But here&#x2019;s a less popular, but increasingly important idea:</p>
<blockquote>The productivity gains from today&#x2019;s AI tools largely accrue to the employee&#x2014;not the firm.</blockquote>
<p>This isn&#x2019;t to say the gains aren&#x2019;t real. They are. But they&#x2019;re&#xA0;localised,&#xA0;private, and&#xA0;unstructured. An employee may finish a task in half the time, but that doesn&#x2019;t mean the firm gets twice the output. Often, the surplus is absorbed by inefficiencies elsewhere: meetings, distractions, idle time. The structure of production hasn&#x2019;t changed. Coordination costs remain high. Measurement is difficult. And critically, the firm struggles to capture, monitor, or even notice the marginal value being created.</p>
<p>This is not a new phenomenon. In the 1980s, Robert Solow famously observed:&#xA0;<a href="https://www.brookings.edu/articles/the-solow-productivity-paradox-what-do-computers-do-to-productivity/?ref=abutler.com" rel="noreferrer">&#x201C;You can see the computer age everywhere but in the productivity statistics.&#x201D;</a>&#xA0;The same paradox may be playing out again with generative AI&#x2014;ubiquitous adoption, but elusive aggregate gains.</p>
<p>To understand why, it helps to revisit the economics of the firm.</p>
<p>In&#xA0;<a href="https://en.wikipedia.org/wiki/The_Nature_of_the_Firm?ref=abutler.com" rel="noreferrer">The Nature of the Firm</a>,&#xA0;Ronald Coase&#xA0;asked: why do firms exist at all? His answer: to minimise&#xA0;transaction costs. When the cost of using the market&#x2014;finding prices, negotiating contracts, enforcing terms&#x2014;is too high, work is brought inside the firm. But AI tools like ChatGPT don&#x2019;t reduce the transaction costs that underpin the firm. They reduce task-level friction, not the broader cost of coordination, delegation, or integration. The firm&#x2019;s core logic remains intact.</p>
<p>Meanwhile,&#xA0;Jensen and Meckling&#x2019;s&#xA0;<a href="https://www.sciencedirect.com/science/article/pii/0304405X7690026X?ref=abutler.com" rel="noreferrer">theory of the firm</a> framed it as a nexus of contracts&#x2014;a structure riddled with&#xA0;principal-agent problems. Employees (agents) don&#x2019;t always act in the interests of the employer (principal), especially when incentives diverge or outputs are hard to monitor. When employees use AI tools, they decide how, when, and to what extent to apply them. Time saved isn&#x2019;t necessarily reinvested in higher output. It&#x2019;s often invisible to the firm. As a result,&#xA0;the productivity surplus is privatised.</p>
<p>From a growth theory lens, the current wave of AI tools behaves like&#xA0;labour-augmenting technology. They make individuals more efficient, but unless firms restructure workflows or alter their production function, the returns are captured by labour, not capital. This echoes the concerns of&#xA0;<a href="https://www.aeaweb.org/articles?id=10.1257%2Fjep.33.2.3&amp;ref=abutler.com" rel="noreferrer">Acemoglu and Restrepo</a>, who have argued that automation technologies which only augment labour often yield uneven growth and exacerbate inequality, without translating into broad-based productivity gains.</p>
<p>But this is where the story turns&#x2014;and where the next phase of AI evolution begins.</p>
<p>The rise of&#xA0;autonomous AI agents as task-completing, workflow-integrated systems represents a fundamentally different economic model. Agents don&#x2019;t assist workers; they&#xA0;replace discrete units of labour, executing end-to-end tasks without constant oversight. They&#x2019;re programmable, consistent, and auditable. More importantly, they&#x2019;re&#xA0;owned and controlled by the firm, not the employee.</p>
<p>These agents begin to behave like&#xA0;capital&#x2014;not merely enhancing labour but substituting for it. In doing so, they invert the current distribution of productivity gains. The surplus now accrues to the owner of the agent&#x2014;the firm&#x2014;not the individual worker. In Coasian terms, they reduce internal coordination costs. In Williamson&#x2019;s <a href="https://www.edegan.com/pdfs/Williamson%20(1988)%20-%20The%20Logic%20of%20Economic%20Organization.pdf?ref=abutler.com" rel="noreferrer">logic</a>, they lower the cost of managing bounded rationality and opportunism. And in Jensen and Meckling&#x2019;s framework, they eliminate agency risk entirely.</p>
<p>Here&#x2019;s where&#xA0;Hayek&#xA0;enters the picture.</p>
<p>In&#xA0;<a href="https://www.econlib.org/library/Essays/hykKnw.html?ref=abutler.com" rel="noreferrer">The Use of Knowledge in Society</a>, Hayek argued that the central planner cannot possess all the distributed knowledge needed to make efficient decisions. Markets, through price signals, aggregate and coordinate this dispersed information. But within firms&#x2014;where hierarchical planning replaces market signals&#x2014;information bottlenecks remain a problem. AI agents, however, begin to solve this. Properly integrated, they&#xA0;process local information automatically, act on it autonomously, and feed results back into a system where no human intervention is needed. They operate like embedded market participants within the firm&#x2019;s structure, effectively reducing Hayekian knowledge frictions internally.</p>
<p>This is why agents represent more than just another tool. They are&#xA0;a new kind of firm-native intelligence&#x2014;one that internalises knowledge, performs work, and closes the feedback loop in a way that&#x2019;s both scalable and measurable. They allow the firm to finally reconfigure its own production function, not just augment individual contributors. The result: not just faster work, but a different kind of firm.</p>
<p>So the emerging divide in AI is not between firms that adopt it and firms that don&#x2019;t&#x2014;it&#x2019;s between firms that&#xA0;use AI as a set of tools for workers, and firms that&#xA0;integrate AI as systems of autonomous agents. The former democratises productivity. The latter&#xA0;captures it.</p>
<p>If the first wave of AI empowered individuals, the second will empower organisations. That&#x2019;s when the productivity boom will finally show up in the data.</p>]]></content:encoded></item><item><title><![CDATA[The Sovereignty Imperative: Five Layers of AI Independence]]></title><description><![CDATA[<p>With artificial intelligence becoming increasingly important economically and politically, terms like &quot;Sovereign Cloud&quot; and &quot;Sovereign AI&quot; are frequently used in policy discussions.  However,  they often lack clear, comprehensive definitions. Many existing definitions&#x2014;particularly those promoted by technology vendors&#x2014;fall significantly short of what governments</p>]]></description><link>https://abutler.com/the-sovereignty-imperative-five-layers-of-ai-independence/</link><guid isPermaLink="false">67bf512e2ed91304dc752926</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Wed, 26 Feb 2025 17:40:43 GMT</pubDate><content:encoded><![CDATA[<p>With artificial intelligence becoming increasingly important economically and politically, terms like &quot;Sovereign Cloud&quot; and &quot;Sovereign AI&quot; are frequently used in policy discussions.  However,  they often lack clear, comprehensive definitions. Many existing definitions&#x2014;particularly those promoted by technology vendors&#x2014;fall significantly short of what governments should consider to establish genuine strategic control over AI.</p>
<p>True AI sovereignty extends far beyond data localisation or regulatory compliance. It encompasses a multi-layered framework that enables governments and organisations to maintain strategic autonomy while still participating in the global AI ecosystem. This framework identifies five interconnected layers of AI independence, each building upon the foundational elements of the previous layer.</p>
<p>By understanding these layers, policymakers and organisational leaders can make informed decisions about where sovereignty is essential, where collaboration is beneficial, and how to balance both to serve their strategic interests. This comprehensive approach allows entities to develop AI capabilities that align with their values, protect their interests, and serve their specific needs without unnecessary isolation from global innovation.</p>
<figure class="kg-card kg-image-card"><img src="https://abutler.com/content/images/2025/02/image-3.png" class="kg-image" alt loading="lazy" width="866" height="756" srcset="https://abutler.com/content/images/size/w600/2025/02/image-3.png 600w, https://abutler.com/content/images/2025/02/image-3.png 866w" sizes="(min-width: 720px) 720px"></figure>
<h2 id="layer-1foundation">Layer 1 - Foundation</h2>
<p>The bedrock of AI sovereignty through essential infrastructure and frameworks:</p>
<ul><li><strong>Legal and Regulatory Control</strong>: Authority to independently create, enforce, and update laws and regulations governing AI within a jurisdiction. This enables the development of AI systems aligned with local needs and protects against external laws or commercial interests overshadowing domestic priorities.</li><li><strong>Resilience and Risk Management</strong>: Ability to independently anticipate, withstand, and recover from AI-related disruptions or crises. This minimises downtime and economic losses while preserving continuity of critical services such as national security, healthcare, or transportation systems.</li><li><strong>Security and Cryptographic Sovereignty</strong>: Full control over encryption standards, security protocols, and cyber defences related to AI operations. This protects sensitive data and systems from foreign surveillance or unauthorised access and ensures local autonomy in deciding the strength and scope of cryptographic measures.</li><li><strong>Diplomatic Framework</strong>: The ability and framework to engage in international cooperation and negotiation on AI matters without compromising domestic interests. This facilitates constructive partnerships, joint research, and global data-sharing where beneficial while preventing poorly negotiated deals.</li><li><strong>Foundational Rights and Ethical Principles</strong>: Moral and legal guidelines shaping AI&apos;s impact on society, such as cultural values, privacy concepts, and justice frameworks. This ensures AI does not conflict with local norms and builds public trust by prioritising fairness, dignity, and consent.</li></ul>
<h2 id="layer-2resource-and-technical">Layer 2 - Resource and Technical</h2>
<p>Securing physical and technical independence:</p>
<ul><li><strong>Data Sovereignty</strong>: Control over how, where, and by whom data is collected, stored, processed, and shared. This protects sensitive or strategic data from external exploitation and reinforces privacy and compliance with local data protection laws.</li><li><strong>Supply Chain Independence</strong>: Capacity to source and produce critical AI components&#x2014;hardware, software, and talent&#x2014;within local or trusted networks. This reduces vulnerabilities tied to foreign suppliers who might withhold technology under political or commercial pressure.</li><li><strong>Energy Independence</strong>: Assurance that AI systems can be powered reliably by local or sufficiently diversified energy sources. This is crucial as AI computations are energy-intensive, and stable power is vital for real-time processing and operations.</li><li><strong>Infrastructure Sovereignty</strong>: Ownership and control over the physical (data centres, cables) and virtual (cloud, networks) infrastructure powering AI. This ensures high reliability and security within local borders and avoids external control points.</li><li><strong>Resource Allocation Sovereignty</strong>: Freedom to decide how funding, compute power, and materials are distributed among various AI projects. This enables prioritisation of national or organisational interests and guarantees critical initiatives aren&apos;t sidelined.</li><li><strong>Technical Standards Control</strong>: Authority to set or choose the protocols and frameworks governing AI system interfaces, data formats, and safety requirements. This encourages interoperability aligned with local needs rather than foreign-led mandates.</li><li><strong>Innovation and R&amp;D Control</strong>: Ability to shape and direct research agendas and development efforts to serve local strategic priorities. This focuses scientific and technological progress on challenges most relevant to local industry, defence, or social initiatives.</li></ul>
<h2 id="layer-3operational">Layer 3 - Operational</h2>
<p>Managing day-to-day AI operations:</p>
<ul><li><strong>AI Lifecycle Control</strong>: Oversight of each stage of an AI system&apos;s life: design, development, deployment, operation, maintenance, and eventual retirement. This prevents external parties from introducing hidden dependencies or controlling crucial updates.</li><li><strong>Workforce Sovereignty</strong>: Building and maintaining a skilled local AI workforce capable of working across the value chain. This strengthens domestic capabilities, reduces dependency on foreign experts, and cultivates talent that understands local contexts.</li><li><strong>AI System Autonomy</strong>: Determining how much of an AI system&apos;s decision-making processes are automated versus guided or overseen by humans. This balances automation to avoid unintended consequences while ensuring critical decisions remain subject to human judgement.</li><li><strong>Process Independence</strong>: Freedom to design AI workflows, development pipelines, and governance structures internally. This avoids external mandates that might conflict with local organisational culture and facilitates faster adaptation to local needs.</li><li><strong>Implementation and Interoperability Frameworks</strong>: Clear rules for how AI systems integrate, communicate, and scale within an organisation or ecosystem. This prevents fragmentation and locked-in scenarios where certain solutions cannot communicate with others.</li><li><strong>Knowledge Sovereignty</strong>: Preserving local expertise, research findings, and intellectual property within borders or trusted networks. This mitigates &quot;brain drain&quot; and protects against the loss of critical know-how to foreign competitors.</li><li><strong>Temporal Autonomy</strong>: Ability to decide the timing and pace of AI deployments, upgrades, or decommissions. This avoids premature rollouts or forced adoption schedules dictated by outside factors.</li><li><strong>Human Override Capabilities</strong>: Safety valves or manual controls allowing people to intervene if AI behaves unexpectedly or dangerously. This maintains a check on potential AI malfunctions, bias, or unethical decisions.</li></ul>
<h2 id="layer-4governance-and-control">Layer 4 - Governance and Control</h2>
<p>Directing strategic AI development:</p>
<ul><li><strong>Policy and Decision Sovereignty</strong>: Autonomy in setting AI strategies, regulations, and overarching goals at the policy level. This aligns AI initiatives with national priorities and avoids external influences that might skew decision-making away from local values.</li><li><strong>Financial Sovereignty</strong>: Control over how AI-related funding, investments, and capital flows are managed locally. This ensures critical projects can be adequately financed without foreign strings attached.</li><li><strong>Crisis Governance Frameworks</strong>: Formal procedures to handle AI emergencies or large-scale system failures. This ensures swift, coordinated responses, minimising damage and restoring normalcy quickly.</li><li><strong>International Governance Frameworks</strong>: Structured rules for cooperation with other nations and global entities on AI matters. This enables beneficial partnerships and shared innovations in a clear, legally supported manner.</li><li><strong>Data Governance Frameworks</strong>: Policies governing data lifecycle activities&#x2014;collection, usage, sharing, retention&#x2014;under a consistent set of rules. This protects individual rights and upholds public trust in data handling.</li><li><strong>Ethical Governance Frameworks</strong>: Practical guidelines for ensuring moral and cultural principles are integrated into AI design and deployment. This encourages accountability, fairness, and respect for societal norms throughout AI&apos;s lifecycle.</li><li><strong>Risk Management and Compliance</strong>: Ongoing processes to identify, assess, and mitigate legal, ethical, and operational risks in AI. This reduces liability, financial losses, and reputational damage stemming from AI missteps.</li><li><strong>Market Control</strong>: Mechanisms to foster fair competition, regulate monopolies, and set guidelines for private-sector AI offerings. This prevents market capture by large foreign or domestic players that may drive innovation in harmful directions.</li><li><strong>Performance Management</strong>: Measuring and evaluating AI systems to ensure they meet predefined targets for accuracy, speed, and fairness. This identifies areas needing improvement and highlights whether AI investments deliver intended benefits.</li></ul>
<h2 id="layer-5strategic">Layer 5 - Strategic</h2>
<p>Enabling international engagement while preserving autonomy:</p>
<ul><li><strong>Federated AI Sovereignty</strong>: Collaborative AI initiatives where each party retains ownership over key components or insights. This facilitates knowledge-sharing and resource pooling without relinquishing critical control over sensitive technology or data.</li><li><strong>Cultural Sovereignty</strong>: Ensuring AI systems respect and incorporate local customs, languages, and cultural expressions. This preserves national identity and social fabric, preventing cultural erosion and enhancing AI&apos;s acceptance across diverse groups.</li><li><strong>Ethical and Moral Alignment</strong>: Embedding deeply held ethical standards&#x2014;beyond baseline legal requirements&#x2014;into AI designs and decisions. This helps ensure AI operates within the moral compass of the society it serves and avoids conflicts with local norms.</li><li><strong>International Alignment and Cooperation</strong>: Engaging with global AI communities for mutual benefit under carefully negotiated terms. This provides access to cutting-edge developments, talent, and shared learning while preventing isolation that can stunt innovation.</li><li><strong>Linguistic Sovereignty</strong>: Commitment to develop and support AI technologies in local languages and dialects. This ensures equitable access to AI-powered tools, especially in multilingual societies, and preserves linguistic heritage.</li></ul>
<p>The framework acknowledges that absolute sovereignty may not be necessary or achievable in every domain. Instead, it helps organisations make strategic decisions about where and how to establish sovereign control over their AI capabilities, allowing for flexible implementation based on specific needs and contexts.</p>]]></content:encoded></item><item><title><![CDATA[On AI Embassies]]></title><description><![CDATA[<h3 id="ai-embassies-securing-sovereign-ai-infrastructure">AI Embassies: Securing Sovereign AI Infrastructure</h3>
<p>As governments increasingly rely on artificial intelligence (AI) for national security, public services, and economic management,&#xA0;protecting AI systems&#xA0;has become an urgent priority. The compromise of critical AI systems could have cascading effects across defense, healthcare, and economic stability. While &quot;</p>]]></description><link>https://abutler.com/on-ai-embassies/</link><guid isPermaLink="false">67a449f02ed91304dc752805</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sat, 08 Feb 2025 13:52:18 GMT</pubDate><content:encoded><![CDATA[<h3 id="ai-embassies-securing-sovereign-ai-infrastructure">AI Embassies: Securing Sovereign AI Infrastructure</h3>
<p>As governments increasingly rely on artificial intelligence (AI) for national security, public services, and economic management,&#xA0;protecting AI systems&#xA0;has become an urgent priority. The compromise of critical AI systems could have cascading effects across defense, healthcare, and economic stability. While &quot;sovereign clouds&quot; offer some protection through data residency and operational controls, they expose governments to&#xA0;foreign jurisdictional risks, supply chain vulnerabilities, and potential access demands from host nations. Even stringent contractual protections can crumble under national security directives or regulatory changes.</p>
<h3 id="the-evolution-from-data-embassies-to-ai-embassies">The Evolution from Data Embassies to AI Embassies</h3>
<p>Data embassies&#xA0;provide a useful precedent in addressing digital sovereignty challenges. <a href="https://e-estonia.com/solutions/e-governance/data-embassy/?ref=abutler.com" rel="noreferrer">Estonia</a>, for example, has secured&#xA0;diplomatic protections for digital assets&#xA0;by establishing secure data havens in friendly jurisdictions. However, these facilities primarily serve as&#xA0;static data backups&#xA0;and lack the operational complexity required for modern AI systems.</p>
<p>Unlike traditional data storage, AI systems require&#xA0;continuous training, real-time inferencing, and constant updates while maintaining absolute governmental control. Simply replicating databases to overseas locations is insufficient to protect live AI workloads that process sensitive intelligence, manage critical infrastructure, or guide economic policy.</p>
<h3 id="the-ai-embassy-concept-a-new-approach-to-sovereign-ai">The AI Embassy Concept: A New Approach to Sovereign AI</h3>
<p>An&#xA0;AI embassy&#xA0;extends diplomatic immunity to AI operations, ensuring that&#xA0;sovereign AI workloads remain under home-nation control, regardless of physical location. Unlike traditional embassies, which safeguard physical spaces and data storage, AI embassies must function as&#xA0;active processing hubs&#xA0;that combine diplomatic immunity with&#xA0;advanced cryptographic security.</p>
<h2 id="legal-foundations-for-ai-embassies">Legal Foundations for AI Embassies</h2>
<p>The AI embassy framework builds upon the <a href="https://legal.un.org/ilc/texts/instruments/english/conventions/9_1_1961.pdf?ref=abutler.com" rel="noreferrer">Vienna Convention on Diplomatic Relations (1961)</a>, extending established protections to digital infrastructure. Estonia&apos;s data embassy in Luxembourg demonstrates this approach&apos;s viability: through carefully crafted bilateral agreements, nations can maintain complete jurisdiction over their digital assets, with specific provisions governing hardware access, maintenance protocols, and emergency procedures.</p>
<p>This legal foundation creates a sovereign computational space operating under home nation jurisdiction, regardless of physical location. The framework shields AI computations from foreign intelligence laws, subpoenas, and data requests, while ensuring comprehensive security for all data transmission channels.</p>
<p>While diplomatic frameworks provide the legal foundation, their practical implementation demands sophisticated technical protections. Modern cryptographic techniques offer the tools to transform legal guarantees into operational reality, creating verifiable barriers that enforce sovereign boundaries in the digital realm.</p>
<h2 id="cryptographic-foundations-for-an-ai-embassy">Cryptographic Foundations for an AI Embassy</h2>
<p>Legal protections establish the framework, but governments must implement advanced cryptographic safeguards to ensure AI embassies remain secure, tamper-proof, and resilient on foreign infrastructure.</p>
<p><strong>Trusted Execution Environments (TEEs) </strong></p>
<p>Hardware-based TEEs provide isolated processing environments for AI workloads, protecting both data and computations through specialized processors and memory encryption. For AI embassies processing citizen data, TEEs create secure enclaves where sensitive model inference occurs, remaining protected even if the host system is compromised.</p>
<p><strong>Verifiable Computation </strong></p>
<p>Building on this secure foundation, verifiable computation methods enable transparent validation of model execution. Through zero-knowledge proofs, governments can verify their AI models execute correctly without revealing sensitive details. For example, this could prove particularly valuable for immigration risk assessment models, where countries can verify proper processing while protecting algorithmic details.</p>
<p><strong>Secure Multi-Party Computation (MPC)</strong></p>
<p><a href="https://en.wikipedia.org/wiki/Secure_multi-party_computation?ref=abutler.com" rel="noreferrer">MPC</a> extends these protections to collaborative scenarios, enabling multiple parties to train AI models without revealing raw data.  For example, financial intelligence units could use this capability to train joint anti-money laundering models while each unit retains only encrypted shares of suspicious transaction data.</p>
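<p>The underlying idea can be conveyed with toy additive secret sharing, a simplification of full MPC; the values and party names below are invented.</p>
<pre><code class="language-python"># Additive secret sharing: each unit splits its private count into random
# shares mod a prime, so the aggregate can be computed even though no party
# ever sees another party's raw value.
import random

PRIME = 2**61 - 1

def share(value, parties=3):
    shares = [random.randrange(PRIME) for _ in range(parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

# Two financial intelligence units with private suspicious-transaction counts.
unit_a, unit_b = share(1200), share(875)
# Each computing party locally adds the shares it was sent...
partial_sums = [(a + b) % PRIME for a, b in zip(unit_a, unit_b)]
# ...and only the combined total is ever reconstructed.
print(sum(partial_sums) % PRIME)  # 2075, with no raw value revealed
</code></pre>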
<p><strong>Homomorphic Encryption (HE)</strong></p>
<p>As AI systems scale, they often require external compute for inference. <a href="https://research.ibm.com/topics/fully-homomorphic-encryption?ref=abutler.com" rel="noreferrer">Fully Homomorphic Encryption</a> enables AI models to process encrypted data without exposure, crucial for applications like public health surveillance where population health trends must be analyzed while maintaining strict privacy.  </p>
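<p>A toy example using the Paillier scheme, an additively homomorphic relative of FHE, conveys the core idea: arithmetic performed on ciphertexts decrypts to arithmetic on the plaintexts. The parameters are textbook-sized and nowhere near secure.</p>
<pre><code class="language-python"># Paillier cryptosystem in miniature (additively homomorphic, NOT full FHE).
# Real deployments use 2048-bit-plus moduli or lattice-based FHE libraries.
import math, random

p, q = 61, 53                 # textbook primes: readable, utterly insecure
n, n2 = p * q, (p * q) ** 2
g = n + 1                     # standard choice that simplifies decryption
lam = math.lcm(p - 1, q - 1)
mu = pow(lam, -1, n)          # valid because g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:  # r must be invertible mod n
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    L = (pow(c, lam, n2) - 1) // n  # L(x) = (x - 1) / n
    return (L * mu) % n

c1, c2 = encrypt(41), encrypt(1)
print(decrypt((c1 * c2) % n2))  # 42: the sum, computed on ciphertexts alone
</code></pre>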
<p><strong>Zero-Knowledge Proofs (ZKPs)</strong></p>
<p>Completing the security architecture, ZKPs enable verification of regulatory compliance without exposing operational details. For example, this allows AI embassies to prove adherence to diplomatic agreements while protecting sensitive capabilities.</p>
<h2 id="democratising-ai-through-embassy-networks">Democratising AI Through Embassy Networks </h2>
<p>The technical evolution of cloud infrastructure creates new economic opportunities while reshaping existing business models. The AI embassy concept offers nations with limited domestic compute infrastructure a pathway to participate in advanced AI development through carefully structured diplomatic arrangements.</p>
<p>These arrangements enable countries to establish AI processing capabilities in technology-rich regions while maintaining autonomous control. Resource-sharing agreements might see host nations providing compute access in exchange for model insights, while multi-party compute arrangements allow smaller nations to pool resources while preserving operational independence.</p>
<h2 id="cloud-infrastructure-evolution-for-ai-embassies">Cloud Infrastructure Evolution for AI Embassies </h2>
<p>The emergence of AI embassies demands a fundamental transformation in cloud service delivery. Providers must implement comprehensive cryptographic capabilities integrated at every level, from hardware isolation and secure enclaves to customer-controlled key management and verifiable operations. This includes:</p>
<ul><li>Dedicated hardware isolation exceeding current bare metal offerings</li><li>Specialized secure enclaves with sovereign routing controls</li><li>Truly customer-controlled key management systems (often a challenge today)</li><li>Enhanced audit trails with cryptographic proof of operations</li><li>Staff with appropriate diplomatic and/or security clearance</li><li>Modified support procedures respecting diplomatic protocols</li></ul>
<h2 id="next-steps-for-policymakers-governments"><strong>Next Steps for Policymakers &amp; Governments</strong></h2>
<p>As cloud providers evolve to meet these technical demands, governments must establish clear frameworks to guide this transformation. Policymakers face the crucial task of translating technical capabilities into practical governance structures that ensure genuine sovereign control.</p>
<p>Nations should initiate bilateral agreements to establish AI embassies in secure jurisdictions, drawing lessons from successful data embassy implementations. These agreements should establish clear jurisdictional authority over AI computation while ensuring protection from foreign regulatory overreach.</p>
<p>Building on this legal framework, governments could then mandate that AI models handling national security, economic planning, and intelligence analysis operate exclusively within sovereign environments. This requires negotiating binding compute agreements with cloud providers that guarantee workload encryption and isolation from foreign access.</p>
<p>These technical standards must extend to security protocols, incorporating Trusted Execution Environments, Secure Multi-Party Computation, and Verifiable Computation. Particular attention must be paid to control plane architecture, as this determines operational autonomy and true sovereignty.</p>
<p>The collaborative framework enables the creation of multi-nation AI embassy networks, fostering joint development among trusted allies while maintaining individual technological independence. These networks can address global challenges like cybersecurity, financial intelligence, and disease prevention while preserving national sovereignty.</p>
<h2 id="the-future-of-ai-embassies-is-cryptographically-secure"><strong>The Future of AI Embassies Is Cryptographically Secure</strong></h2>
<p>The evolution of AI embassies depends on parallel advances in technology and diplomacy. Performance and scalability challenges require sustained research into specialized hardware accelerators and optimized cryptographic algorithms. Meanwhile, the diplomatic landscape must adapt through new treaties and multilateral agreements explicitly addressing AI operations across borders.</p>
<p>Allied nations will likely deepen collaboration through shared cryptographic frameworks and infrastructure, creating resilient networks of federated AI embassies. However, this cooperation must balance against individual sovereignty concerns as the international community adapts existing frameworks to address emerging threats, from malicious model manipulation to sophisticated data poisoning attacks.</p>
<p>Beyond mere data storage, AI embassies represent the convergence of cryptographic innovation and diplomatic principles. By combining technical sophistication with legal protections, governments can maintain control over their AI assets regardless of physical location, ensuring both security and sovereignty in an increasingly interconnected world.</p>]]></content:encoded></item><item><title><![CDATA[The Evolution of Interfaces: A Hybrid Future]]></title><description><![CDATA[<p>The history of user interfaces is one of constant evolution&#x2014;from the command-line interfaces of early computing to the graphical systems that democratised technology. Now, with the rise of AI-driven conversational interfaces like ChatGPT, we&apos;re seeing a new paradigm emerge. But does this shift represent progress, or</p>]]></description><link>https://abutler.com/the-evolution-of-interfaces-a-hybrid-future/</link><guid isPermaLink="false">678b7e802ed91304dc7527ba</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sat, 18 Jan 2025 11:38:02 GMT</pubDate><content:encoded><![CDATA[<p>The history of user interfaces is one of constant evolution&#x2014;from the command-line interfaces of early computing to the graphical systems that democratised technology. Now, with the rise of AI-driven conversational interfaces like ChatGPT, we&apos;re seeing a new paradigm emerge. But does this shift represent progress, or does it reveal the limitations of modern interaction design?</p>
<p>The resurgence of chat-based interfaces raises a fundamental question: Are we witnessing a leap forward in human-computer interaction, or merely revisiting a less efficient form of engagement? Graphical user interfaces (GUIs) revolutionised computing with their high bandwidth and visual immediacy, so can conversational interfaces ever truly compete?</p>
<h3 id="a-brief-history-of-interfaces">A Brief History of Interfaces</h3>
<p><strong>Command-Line Interfaces:</strong><br>In the early days of computing, user interaction relied on text-based commands. These systems were precise but required technical expertise. The &quot;green screens&quot; of mainframe terminals and systems like MS-DOS exemplify this era.</p>
<p><strong>Graphical User Interfaces:</strong><br>Building on innovations from Xerox PARC, Apple and Microsoft introduced GUIs that brought visual, spatial, and interactive elements to computing. These interfaces made technology accessible to the masses, transforming computers into everyday tools.</p>
<p><strong>Conversational Interfaces:</strong><br>Today, we are seeing the rise of text- and voice-based interfaces powered by AI. These promise more natural, human-like communication&#x2014;but their reliance on sequential information flow raises questions about their efficiency.</p>
<h3 id="claude-shannon%E2%80%99s-legacy-in-interface-design">Claude Shannon&#x2019;s Legacy in Interface Design</h3>
<p>Claude Shannon, the father of information theory, provided insights into how efficiently information can be transmitted over a channel. His principles highlight the strengths and weaknesses of different interface types:</p>
<ul><li><strong>Channel Capacity:</strong>&#xA0;GUIs maximize information transmission by leveraging parallel streams&#x2014;visual, spatial, and interactive. Menus, icons, and real-time feedback enable users to process vast amounts of data quickly. Chat interfaces, in contrast, are inherently sequential, transmitting text or speech linearly and creating bottlenecks.</li><li><strong>Entropy and Encoding:</strong>&#xA0;Shannon emphasized reducing uncertainty (entropy) in communication. GUIs minimize ambiguity with predefined options like buttons and dropdowns. Conversational interfaces, however, must interpret free-form text, increasing the risk of errors and inefficiencies.</li></ul>
<p>For conversational systems to compete with GUIs, they must increase their effective bandwidth. Integrating text input with visual feedback could preserve conversational naturalness while reducing cognitive load.</p>
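<p>A rough back-of-envelope calculation illustrates the bandwidth gap. Assuming Shannon's classic estimate of roughly one bit of entropy per character of printed English, and hypothetical but plausible interaction rates, a stream of GUI clicks can outpace a typed conversation:</p>
<pre><code class="language-python"># Back-of-envelope bandwidth comparison in the spirit of Shannon.
# All rates are illustrative assumptions, not measurements.
import math

# A GUI menu of 8 equally likely commands: one click selects log2(8) bits.
bits_per_click = math.log2(8)                 # 3 bits of intent per click

# Chat: ~1 bit of entropy per character of English (Shannon's estimate),
# at roughly 5 characters per word.
bits_per_word = 1.0 * 5

gui_rate = 3 * bits_per_click                 # ~3 clicks/s   -> ~9 bits/s
chat_rate = (40 / 60) * bits_per_word         # ~40 words/min -> ~3.3 bits/s

print(f"GUI: {gui_rate:.1f} bits/s, chat: {chat_rate:.1f} bits/s")
</code></pre>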
<hr>
<h3 id="the-psychology-of-interaction-why-guis-dominate">The Psychology of Interaction: Why GUIs Dominate</h3>
<p>Several principles of human psychology explain why GUIs remain the primary interface for most tasks:</p>
<ol><li><strong>George A. Miller&#x2019;s &quot;Magical Number Seven&quot;:</strong><br>Humans can hold only 7 &#xB1; 2 chunks of information in working memory. GUIs spatially distribute information, reducing reliance on memory. Conversational interfaces, with their sequential exchanges, can overwhelm users with lengthy responses.</li><li><strong>Hick&#x2019;s Law:</strong><br>Decision-making time increases logarithmically with the number of choices. GUIs excel by structuring choices hierarchically, speeding up decisions. Chat interfaces often present open-ended prompts, increasing cognitive effort.</li><li><strong>Fitts&#x2019;s Law:</strong><br>The time to reach a target depends on its size and distance. GUIs optimize frequent actions with large, accessible buttons, whereas chat interfaces require users to articulate commands, slowing interactions.</li></ol>
<p>To challenge GUI dominance, conversational interfaces must address these psychological inefficiencies.</p>
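<p>These laws are easy to play with numerically. The sketch below plugs hypothetical round-number coefficients into Hick's and Fitts's formulas; the absolute times are illustrative, but the logarithmic shape of both curves is the point:</p>
<pre><code class="language-python"># Illustrative decision/movement-time estimates from Hick's and Fitts's
# laws. Coefficients (a, b) are hypothetical round numbers, not measured.
import math

def hick_time(n_choices, b=0.2):
    # Hick's law: T = b * log2(n + 1), seconds per decision.
    return b * math.log2(n_choices + 1)

def fitts_time(distance, width, a=0.1, b=0.15):
    # Fitts's law: MT = a + b * log2(2D / W), seconds per movement.
    return a + b * math.log2(2 * distance / width)

print(f"choose from 8 menu items:      {hick_time(8):.2f} s")
print(f"choose from 100 open options:  {hick_time(100):.2f} s")
print(f"click a large nearby button:   {fitts_time(400, 80):.2f} s")
print(f"click a small distant target:  {fitts_time(400, 10):.2f} s")
</code></pre>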
<h3 id="why-conversational-interfaces-are-resurging">Why Conversational Interfaces Are Resurging</h3>
<p>Despite their limitations, chat interfaces excel in areas where GUIs struggle:</p>
<ul><li><strong>Naturalness:</strong>&#xA0;They feel intuitive, requiring little to no training. Users can express complex ideas in their own words.</li><li><strong>Context Awareness:</strong>&#xA0;AI models like ChatGPT infer user intent and adapt responses dynamically.</li><li><strong>Accessibility:</strong>&#xA0;Conversational systems work across diverse devices and user groups, including those with disabilities or limited tech skills.</li></ul>
<p>However, to truly evolve, these systems must overcome the limitations of sequential information transfer, rethinking how humans and machines communicate.</p>
<hr>
<h3 id="the-future-toward-hybrid-interfaces">The Future: Toward Hybrid Interfaces</h3>
<p>The future of interfaces likely lies in&#xA0;<strong>hybrid systems</strong>&#xA0;that combine the strengths of GUIs and conversational interfaces:</p>
<ol><li><strong>Multimodal Interaction:</strong><br>Borrowing from Shannon&#x2019;s principles, hybrid systems can expand communication channels by integrating visual, auditory, and textual elements. For example, typing &#x201C;Find my recent emails&#x201D; could prompt the system to visually highlight the relevant messages in the email client GUI.</li><li><strong>Context-Aware AI:</strong><br>Inspired by Engelbart&#x2019;s vision of augmenting human intellect, conversational AI can proactively reduce input effort by anticipating user needs, aligning with Shannon&#x2019;s goal of minimising uncertainty.  This is, to some extent, the promise of agentic architectures.</li><li><strong>Adaptive Interfaces:</strong><br>Systems could dynamically switch between chat and GUI modes based on the task. High-bandwidth tasks (e.g., designing a presentation) might leverage GUIs, while exploratory tasks (e.g., brainstorming) could rely on conversation.  The interfaces should adapt dynamically and fluidly, without the user experiencing the jarring effect of shifting from a conversational window to a GUI.</li><li><strong>Parallelism Through AI:</strong><br>Conversational AI could automate repetitive tasks or summarise information visually while users focus on high-level dialogue.  This will require predictive capabilities, and for the co-pilot paradigm to be extended towards greater autonomy.  The work being done to develop agents that can reason and, through interaction with tools, integrate with the outside world holds significant promise in this regard.</li></ol>
<h3 id="blending-strengths-for-a-smarter-future">Blending Strengths for a Smarter Future</h3>
<p>The resurgence of chat interfaces is not a regression but a response to specific user needs for accessibility, natural interaction, and context-aware assistance. While GUIs remain dominant due to their alignment with human cognitive strengths, the future lies in blending these paradigms.</p>
<p>As Shannon taught us, effective communication is not just about capacity&#x2014;it&#x2019;s about encoding information to suit the channel. By leveraging multimodal interaction, predictive AI, and hybrid approaches to interface design, we can create intelligent systems that amplify human capabilities. As such, it&apos;s likely that the next era of human-computer interaction will not be about replacing GUIs with conversational interfaces, but harmonising the two into systems that are greater than the sum of their parts.</p>
<hr>
<h3 id="references">References</h3>
<ol><li>Shannon, C. E. (1948).&#xA0;<em>A Mathematical Theory of Communication</em>. Bell System Technical Journal, 27(3), 379&#x2013;423.</li><li>Miller, G. A. (1956).&#xA0;<em>The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information</em>. Psychological Review, 63(2), 81&#x2013;97.</li><li>Hick, W. E. (1952).&#xA0;<em>On the Rate of Gain of Information</em>. Quarterly Journal of Experimental Psychology, 4(1), 11&#x2013;26.</li><li>Fitts, P. M. (1954).&#xA0;<em>The Information Capacity of the Human Motor System in Controlling the Amplitude of Movement</em>. Journal of Experimental Psychology, 47(6), 381&#x2013;391.</li><li>Engelbart, D. C. (1962).&#xA0;<em>Augmenting Human Intellect: A Conceptual Framework</em>. SRI International Report, Stanford Research Institute.</li><li>Nielsen, J. (1993).&#xA0;<em>Usability Engineering</em>. Academic Press.</li><li>Norman, D. A. (2013).&#xA0;<em>The Design of Everyday Things</em>. Basic Books.</li></ol>]]></content:encoded></item><item><title><![CDATA[Building National AI Sovereignty: A Strategic Framework]]></title><description><![CDATA[<p>&quot;Just deploy local data centers and you&apos;ll have AI sovereignty.&quot;</p>
<p>I hear this surprisingly often from senior leaders and from vendors arguing that local deployment is the same as sovereign deployment.  Unfortunately, this view isn&apos;t just oversimplified&#x2014;it&apos;s dangerously inadequate in</p>]]></description><link>https://abutler.com/the-five-pillars-of-sovereignty-in-the-ai-era/</link><guid isPermaLink="false">676e8c312ed91304dc75272c</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Fri, 27 Dec 2024 12:20:26 GMT</pubDate><content:encoded><![CDATA[<p>&quot;Just deploy local data centers and you&apos;ll have AI sovereignty.&quot;</p>
<p>I hear this surprisingly often from senior leaders and from vendors arguing that local deployment is the same as sovereign deployment.  Unfortunately, this view isn&apos;t just oversimplified&#x2014;it&apos;s dangerously inadequate in a world in which AI is becoming increasingly important to the functioning of our economies and societies.  </p>
<p>As AI becomes intrinsic to the operation of our healthcare systems, our financial markets and even national security, the question of AI sovereignty has never been more critical.  Yet many nations and organizations are still approaching it with an outdated infrastructure-focused mindset.</p>
<p>True AI sovereignty is far more nuanced and demanding. It requires a comprehensive approach across five critical dimensions:</p>
<h2 id="physical-independence-more-than-just-hardware">Physical Independence: More Than Just Hardware</h2>
<p>The foundation of AI sovereignty starts with, but goes far beyond, physical infrastructure. Yes, you need data centers&#x2014;but you also need:</p>
<ul><li>Sovereign high-performance computing clusters and AI accelerators.</li><li>Strategic control over specialized processor supply, such as the GPUs or ASICs that power both training and inference.</li><li>Independent cloud platforms optimized for AI workloads that enable broad access to the computational capabilities needed.</li><li>Comprehensive security validation systems that enable even externally sourced models or hardware to be independently validated and proven.</li></ul>
<p>This means making tough choices about what to build domestically versus where to forge strategic partnerships that don&apos;t create dangerous dependencies.</p>
<h2 id="technological-freedom-controlling-your-ai-destiny">Technological Freedom: Controlling Your AI Destiny</h2>
<p>Infrastructure alone is meaningless without technological independence. This requires:</p>
<ul><li>The capability to train and customize your own foundation models, whether large language models or models trained on some specialized industry dataset.</li><li>World-class AI research institutions driving innovation and creating a pipeline of talent and intellectual property.</li><li>Complete control over your AI development lifecycle across all the steps in the value chain.</li><li>Independent ability to audit and secure AI systems.</li></ul>
<p>This isn&apos;t about reinventing every wheel&#x2014;it&apos;s about having the capacity to develop and control the technologies that matter most to your strategic interests.</p>
<h2 id="operational-authority-mastering-the-ai-lifecycle">Operational Authority: Mastering the AI Lifecycle</h2>
<p>Even with infrastructure and technology, you need the capability to operate AI systems independently. This means:</p>
<ul><li>Deep pools of domestic AI talent, including data scientists and AI/ML researchers, but also people able to deploy AI models at scale and manage them through their lifecycle.</li><li>Sovereign AI deployment and optimization capabilities across the entire lifecycle of the model.</li><li>Control over data processing and model training (including fine-tuning and optimization).</li><li>Independent oversight systems that can assess both domestically and externally produced models and systems to ensure compliance with local requirements.</li></ul>
<p>The key here is building sustainable operational independence without isolating yourself from global talent and innovation.</p>
<h2 id="economic-control-securing-strategic-advantage">Economic Control: Securing Strategic Advantage</h2>
<p>AI sovereignty has profound economic dimensions. Nations need:</p>
<ul><li>Strong domestic AI companies in strategic sectors, particularly those related to critical infrastructure or other sensitive segments.</li><li>Sovereignty over critical training data, particularly data that is created by citizens.</li><li>Clear frameworks for managing foreign AI deployment, such as how acquired data is managed for retraining purposes.</li><li>Strategic investment in critical capabilities, such as ensuring there is sufficient funding for startups and research in key AI domains.</li></ul>
<p>This doesn&apos;t necessarily mean protectionism, but it does mean ensuring healthy competition and innovation.</p>
<h2 id="cultural-autonomy-preserving-identity-in-the-ai-age">Cultural Autonomy: Preserving Identity in the AI Age</h2>
<p>Perhaps most overlooked is the cultural dimension of AI sovereignty. This includes:</p>
<ul><li>Ensuring AI systems reflect national values and context.</li><li>Developing strong domestic language processing capabilities, particularly for local dialects and norms.</li><li>Managing AI&apos;s social impact, such as on language or culture.</li><li>Preserving digital cultural identity given globally dominant AI models can exert a homogenizing effect on culture that might not be appropriate.</li></ul>
<p>This isn&apos;t just about technology&#x2014;it&apos;s about maintaining authentic cultural expression in an AI-driven world.</p>
<h2 id="the-path-forward">The Path Forward</h2>
<p>The goal isn&apos;t digital isolation&#x2014;that&apos;s neither possible nor desirable in today&apos;s interconnected world. Instead, the objective is strategic autonomy: the ability to make independent choices about critical AI capabilities while participating fully in the global digital economy.</p>
<p>Success requires carefully balancing several factors:</p>
<ul><li>Identifying truly strategic capabilities that require sovereign control</li><li>Building domestic strength in critical areas</li><li>Fostering international partnerships that enhance rather than undermine sovereignty</li><li>Creating frameworks for beneficial collaboration while protecting core interests</li></ul>
<h2 id="what-leaders-should-do">What Leaders Should Do</h2>
<ol><li>Start by mapping your critical AI dependencies and vulnerabilities</li><li>Identify the capabilities that are truly strategic for your context</li><li>Develop a phased plan for building essential sovereign capabilities</li><li>Create frameworks for managing international collaboration</li><li>Build the governance systems needed to execute effectively</li></ol>
<p>The time for simplified approaches to AI sovereignty is over. Leaders need to embrace this complexity and build comprehensive strategies that ensure genuine independence in the AI era.</p>]]></content:encoded></item><item><title><![CDATA[A graph pathfinding approach to FX liquidity challenges]]></title><description><![CDATA[<p>In blockchain networks&#xA0;that are focused on enabling PvP cross-border payments,&#xA0;such as&#xA0;the BIS Innovation Hub&#x2019;s&#xA0;mBridge, addressing the FX&#xA0;conversion requirement is important. </p>
<p>There may be a challenge with these multi-currency payment networks around the liquidity of certain currency pairs.&#xA0;</p>]]></description><link>https://abutler.com/a-graph-pathfinding-approach-to-fx-liquidity-challenges/</link><guid isPermaLink="false">65db6b502ed91304dc7526bb</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sun, 25 Feb 2024 16:32:24 GMT</pubDate><content:encoded><![CDATA[<p>In blockchain networks&#xA0;that are focused on enabling PvP cross-border payments,&#xA0;such as&#xA0;the BIS Innovation Hub&#x2019;s&#xA0;mBridge, addressing the FX&#xA0;conversion requirement is important. </p>
<p>There may be a challenge with these multi-currency payment networks around the liquidity of certain currency pairs.&#xA0;&#xA0;This can, of course, be solved by using an external market maker or a commonly traded currency such as the USD as a vehicle currency &#x2013; but this may diminish one of the core value propositions of these types of networks.&#xA0;&#xA0;Further, we can conceive of a future where, with increasing amounts of non-cash assets being tokenized, a more diverse set of assets may be used to effect cross-border trades. </p>
<p>In thinking about possible approaches, a graph theoretic mechanism to identify optimal currency conversion paths, even in scenarios&#xA0;where direct conversions face liquidity challenges, could be useful.&#xA0;&#xA0;It would, in essence, find a set of paths through one or more conversions that, taken in aggregate, optimize for the lowest total cost of conversion whilst adhering to liquidity constraints in the network. </p>
<p>The idea would be as follows:</p>
<p><strong>Step 1: Build a Graph Representation:</strong></p>
<p>Build a graph representing the market:</p>
<ol><li>Each currency (or asset) is a node on the graph.</li><li>Every edge is directional, reflecting the conversion between two currencies (nodes).</li></ol>
<p><strong>Step 2: Edge Weight Calculation:</strong></p>
<p>Each edge is assigned two distinct weights:</p>
<ol><li>The inverse of the FX rate, ensuring that stronger currencies are more favorable. This weight essentially captures the cost of conversion.</li><li>The available liquidity for that conversion. This weight indicates the maximum volume that can be converted directly between the two currencies without exhausting available resources.</li></ol>
<p>Both values could be continuously updated to reflect real-time pricing via an oracle or some other mechanism.</p>
<p><strong>Step 3: Pathfinding with liquidity constraints</strong></p>
<p>Find all paths between the source and target currency nodes:</p>
<ol><li>Using a variation of the <a href="https://www.geeksforgeeks.org/bellman-ford-algorithm-dp-23/?ref=abutler.com">Bellman-Ford algorithm</a>, the system identifies potential conversion paths from a source currency to a destination currency.</li><li>The algorithm is tailored not only to look for the most cost-effective path but also to explore multiple viable paths. This multi-path exploration ensures redundancy and provides flexibility in addressing liquidity constraints.</li></ol>
<p><strong>Step 4: Transaction allocation across multiple paths:</strong></p>
<ol><li>After identifying multiple potential paths, they are ranked based on their associated conversion costs.</li><li>For each path, the &quot;bottleneck liquidity&quot; is noted &#x2014; this is the smallest liquidity value among all the edges in that path.</li><li>Starting with the most cost-effective path, the system checks if the desired transaction amount can be handled by the path&apos;s bottleneck liquidity. If the transaction amount exceeds this liquidity, the maximum possible amount is converted using this path, and the remaining amount is allocated to the next best path. This process continues iteratively, ensuring that liquidity constraints are always respected while achieving the most favorable conversion rate.</li></ol>
<p><strong>Step 5:&#xA0;Execution:</strong></p>
<p>The conversion instructions, based on the final allocation across multiple paths, are executed. This ensures that each individual conversion stays within the liquidity constraints of its respective path, while collectively achieving the best possible rate for the entire transaction.</p>
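<p>To make the mechanics concrete, below is a runnable sketch of Steps 1 to 4 in Python. The currencies, FX rates, and liquidity figures are entirely hypothetical; for clarity it enumerates simple paths with a depth-first search rather than implementing the Bellman-Ford variant, expresses each edge&apos;s liquidity in source-currency-equivalent units, and assumes the candidate paths do not share edges:</p>
<pre><code class="language-python"># Sketch of Steps 1-4: build the graph, weight the edges, find paths,
# then allocate the transaction across them. Hypothetical market data.
import math

# Steps 1 + 2: directed edges as (rate, liquidity). The additive cost
# weight is -log(rate), i.e. the log of the inverse FX rate, so costs
# sum along a route and the cheapest path has the best effective rate.
edges = {
    ("AED", "INR"): (22.7, 400_000), ("AED", "USD"): (0.27, 900_000),
    ("USD", "INR"): (83.2, 250_000), ("AED", "SGD"): (0.37, 150_000),
    ("SGD", "INR"): (62.1, 100_000),
}

def paths(src, dst, seen=()):
    # Step 3: depth-first enumeration of all simple conversion paths.
    if src == dst:
        yield []
        return
    for (u, v), _ in edges.items():
        if u == src and v not in seen:
            for rest in paths(v, dst, seen + (src,)):
                yield [(u, v)] + rest

def cost(path):         # total cost = sum of -log(rate) over the edges
    return sum(-math.log(edges[e][0]) for e in path)

def bottleneck(path):   # smallest liquidity among the path's edges
    return min(edges[e][1] for e in path)

def allocate(src, dst, amount):
    # Step 4: fill the cheapest paths first, respecting bottlenecks.
    # (This simple version assumes the paths are edge-disjoint.)
    plan, remaining = [], amount
    for path in sorted(paths(src, dst), key=cost):
        take = min(remaining, bottleneck(path))
        if take > 0:
            plan.append((path, take))
            remaining -= take
        if remaining == 0:
            break
    return plan, remaining

plan, unfilled = allocate("AED", "INR", 600_000)
for path, amt in plan:
    print(" -> ".join([path[0][0]] + [v for _, v in path]), amt)
</code></pre>
<p>Running this allocates the 600,000 AED across the SGD route first (the best effective rate, filled up to its 100,000 bottleneck), then the direct AED-INR edge, and finally the USD route: exactly the waterfall described in Step 4.</p>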
<p>In essence, this approach ensures that conversions in the multicurrency blockchain network are executed at optimal rates while always respecting liquidity constraints. It provides a balanced and flexible solution, especially beneficial for currency pairs that are less commonly traded.</p>]]></content:encoded></item><item><title><![CDATA[Liquidity-saving mechanisms in trade credit networks: Optimising corporate liquidity]]></title><description><![CDATA[<div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://abutler.com/content/files/2023/10/Trade-Credit-Liquidity-Savings-Mechanism-vF.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">Trade Credit Liquidity Savings Mechanism vF</div><div class="kg-file-card-caption"></div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">Trade Credit Liquidity Savings Mechanism vF.pdf</div><div class="kg-file-card-filesize">1 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div>
<p><br>Trade credit, or the delayed payment for intermediate goods, has been reported as an important source of short-term external finance for many</p>]]></description><link>https://abutler.com/untitled-11/</link><guid isPermaLink="false">653a58c62ed91304dc752642</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Fri, 27 Oct 2023 07:35:02 GMT</pubDate><content:encoded><![CDATA[<div class="kg-card kg-file-card"><a class="kg-file-card-container" href="https://abutler.com/content/files/2023/10/Trade-Credit-Liquidity-Savings-Mechanism-vF.pdf" title="Download" download><div class="kg-file-card-contents"><div class="kg-file-card-title">Trade Credit Liquidity Savings Mechanism vF</div><div class="kg-file-card-caption"></div><div class="kg-file-card-metadata"><div class="kg-file-card-filename">Trade Credit Liquidity Savings Mechanism vF.pdf</div><div class="kg-file-card-filesize">1 MB</div></div></div><div class="kg-file-card-icon"><svg viewbox="0 0 24 24"><defs><style>.a{fill:none;stroke:currentColor;stroke-linecap:round;stroke-linejoin:round;stroke-width:1.5px;}</style></defs><title>download-circle</title><polyline class="a" points="8.25 14.25 12 18 15.75 14.25"/><line class="a" x1="12" y1="6.75" x2="12" y2="18"/><circle class="a" cx="12" cy="12" r="11.25"/></svg></div></a></div>
<p><br>Trade credit, or the delayed payment for intermediate goods, has been reported as an important source of short-term external finance for many non-financial firms. The value of trade payables is comparable with that of outstanding corporate bonds and is about one-third of non-financial firms&#x2019; outstanding bank loans (Boissay, et al., 2020). In the United States, trade receivables represented approximately 8% of the assets of corporate balance sheets in 2022 (Federal Reserve System, 2023).&#xA0;</p>
<p>During financial crises, as bank credit weakens, trade credit becomes a substitute source of liquidity (see Ba&#xF1;os-Caballero, et al., 2023). According to the literature, firms able to access trade credit are better positioned to withstand financial crises. For instance, a study of over 200,000 European firms found that an increase in the availability of trade credit to a firm led to a significant decrease in the likelihood of distress (McGuinness, et al., 2018). Therefore, there is a potential benefit in supporting trade credit throughout the entire economic cycle, particularly during financial downturns.&#xA0;</p>
<p>By extending a short-term loan to buyers, sellers of goods and services provide liquidity, facilitate the purchase of supplies by other firms, encourage long-term customer relationships, and increase demand. As firms recursively borrow from their suppliers and lend to their customers through the supply chain, trade credit networks foster economic activity. Accordingly, trade credit has been reported to be a key element in enabling economic activity and ensuring financial stability.&#xA0;&#xA0;</p>
<p>However, this positive feedback loop created by trade credit networks also works in the opposite direction, with the potential to create instability in the economy. For instance, if some firms do not pay on time, others may find it difficult to pay on time, and a cascading effect of higher payment terms may ensue; furthermore, when firms cannot pay, the cascading effect may be worse. That is, in adverse scenarios, the trade credit channel that runs parallel to input-output linkages could negatively affect the liquidity and the solvency of firms, and, in turn, economic activity and stability (see Costello, 2020).&#xA0;</p>
<p>This type of network and feedback effect is well-known in interbank markets. When banks provide liquidity to each other in the money market, the inability of a single bank to pay on time may threaten the safe and efficient functioning of the payment system and, eventually, the solvency of the financial system. Large-value payment systems have long acknowledged that interbank liquidity is a network problem that is better tackled by implementing intraday Liquidity-Saving Mechanisms (LSMs), i.e., a suite of algorithms designed to compress liquidity requirements to facilitate smoother flows of liquidity. By introducing LSMs into real-time gross settlement systems, large-value payment systems have mitigated liquidity and counterparty risk.&#xA0;&#xA0;&#xA0;&#xA0;</p>
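<p>The liquidity compression that LSMs deliver can be seen in a toy multilateral netting example. The obligation matrix below is hypothetical, and real LSMs use far more sophisticated offsetting algorithms, but even naive netting collapses the gross funding requirement:</p>
<pre><code class="language-python"># Toy multilateral netting, the core idea of an LSM: owes[a][b] is the
# amount firm a owes firm b. All figures are hypothetical.
owes = {
    "A": {"B": 100, "C": 30},
    "B": {"C": 80},
    "C": {"A": 90},
}
firms = sorted(owes)

gross = sum(sum(row.values()) for row in owes.values())

# Each firm's net position: total payables minus total receivables.
net = {f: sum(owes[f].values())
          - sum(owes[g].get(f, 0) for g in firms)
       for f in firms}

# After netting, only net debtors need liquidity, and only their net amount.
needed = sum(v for v in net.values() if v > 0)
print(f"gross funding needed: {gross}, after netting: {needed}")
# -> gross funding needed: 300, after netting: 40
</code></pre>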
<p>We suggest a similar approach to mitigate liquidity and counterparty risks in trade credit networks. By introducing a new Financial Market Infrastructure (FMI) that runs LSMs in trade credit networks, we can reduce the outstanding exposures among firms, reduce the payment terms, and mitigate potential risks arising from undesirable network and feedback loop effects. This way, by implementing LSMs, risks and potential amplification effects from trade credit exposures are mitigated while their potential contribution to firms&#x2019; growth, supply chain resilience, and economic activity is preserved. Besides, as this implementation of LSMs requires observing the trade credit network, new data for monitoring and policy-making is available for central banks and financial authorities. </p>]]></content:encoded></item><item><title><![CDATA[IP considerations in consulting/services agreements]]></title><description><![CDATA[<p>As a result of Vision 2030, there is a tremendous amount of innovation occurring across every dimension of Saudi Arabia: from the digital transformation of government, the development of the gigaprojects (such as NEOM), or the vast efforts of the Public Investment Fund to accelerate the growth of the non-oil</p>]]></description><link>https://abutler.com/ip-considerations-in-consulting-services-agreements/</link><guid isPermaLink="false">65105cd42ed91304dc752630</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sun, 24 Sep 2023 15:59:38 GMT</pubDate><content:encoded><![CDATA[<p>As a result of Vision 2030, there is a tremendous amount of innovation occurring across every dimension of Saudi Arabia: from the digital transformation of government, the development of the gigaprojects (such as NEOM), or the vast efforts of the Public Investment Fund to accelerate the growth of the non-oil sector as a driver of Saudi GDP growth. In many cases, this innovation is leading to the invention of new ideas, new approaches, and new technologies or solutions; whilst this is delivering immediate value in its original context, these innovations also have potential to deliver global value and become an important contributor to future growth and competitive advantage for the Kingdom.</p>
<p>As such, a key consideration must be intellectual property: ensuring that the knowledge that is being created every day across these dimensions is captured in a form that it can be utilised. In some cases, this might be to create a new public good or make the innovation available more broadly to the society; and, in other cases, it might be to monetise the intellectual property through the creation of new companies, products, or services. Indeed, the Saudi Arabian IP Authority (SAIP) lays out a&#xA0;<a href="https://www.saip.gov.sa/en/national-strategy/?ref=abutler.com">detailed national strategy</a>&#xA0;for IP in the country that serves to underscore the importance of IP to economic growth.</p>
<p>Having negotiated and led many complex contracts over the years (albeit from the vendor side of the table!), I was recently asked to share some thoughts or advice on what companies and/or agencies should consider when entering into a contract with a consultancy or services firm to ensure that any IP that is created is appropriately captured.&#xA0; This applies whether the intent is to monetise it or whether the intent is to ensure that other Saudi firms and entities can fully benefit from it.</p>
<ol><li>The vendor/consultancy will bring what is termed &#x201C;background IP&#x201D;.&#xA0; This is pre-existing intellectual property that they bring to a contractual relationship, such as assets that might be used in a software development project or tools that might be used in a paper-based consultancy engagement.&#xA0; The consultant should explicitly disclose what constitutes background IP in a given engagement -- ideally in the contract. &#xA0;&#xA0;A &#x201C;continuous disclosure&#x201D; requirement should also be considered wherein, as a consultancy engagement progresses, the consultant notifies the client of background IP that will be leveraged.</li><li>There will be IP that is created through the engagement that may or may not build on or extend the background IP.&#xA0; This is the &#x201C;foreground IP&#x201D;. One can think of this as the &quot;deliverable&quot; but also, as part of a deliverable, there may be inventions that arise which might not necessarily be the core work product but a consequence of it.</li><li>The entity should ensure that the consultant assigns them a complete, irrevocable, worldwide, exclusive, and royalty-free assignment of all rights and interest in any foreground IP that was developed by the consultant for the client.&#xA0;</li><li>Often, consultants will include a &#x201C;license back&#x201D; clause where they license back the foreground IP they develop for their client.&#xA0; In other words, they would be granted a fully irrevocable worldwide license to do as they wish with it. This may seem harmless or may not even be noticed but it can materially weaken a client&#x2019;s ability to monetise the IP exclusively or introduce other issues such as if this IP is reused/sold to competitors.&#xA0; As such, these clauses should be considered carefully in the context of the client&#x2019;s broader business goals.&#xA0;</li><li>There may be background IP embedded in the deliverable so consideration should also be given to ensuring its unencumbered use.&#xA0; For example, proprietary tools, technologies, assets, or libraries may be used by a consultant to accelerate delivery or achieve other benefits.&#xA0; Rights to continue to use, modify, and even assign to others should be considered based on the business requirements; as should careful examination of whether, as a result of continuing to use background IP after the departure of a consultant, there is a requirement to pay royalties and/or use services.</li><li>The client should ensure that any third-party IP embedded within the consultant&apos;s background IP, or otherwise utilized in the project, is free of liens, encumbrances, or restrictions. This ensures the IP can be used, extended, and monetized in line with business objectives.</li><li>Additionally, the consultant should indemnify the client against any third-party claims resulting from future IP infringements. This could be in relation to the work that the consultant produces or it could be in relation to subcontracted or embedded components in their deliverables. &#xA0;</li><li>It may be useful to incorporate provisions that require the consultant to assist in the recording and registration of the IP rights and to provide assistance in the future to confirm the exclusive ownership of the IP if required. 
For example, if there is a need to file patent disclosures, then the consultant will provide input into this documentation.</li><li>The consultant should be obligated to notify the client in case they become aware of any IP infringement claim or potential conflict related to the deliverables.&#xA0; This should not be time-limited to the contract duration.</li><li>If the client wants to maximise optionality for the use of the IP, the contract should include a sub-licensing clause that gives the client the right to sublicense or assign any foreground and/or background IP to third parties.</li><li>The term &#x201C;residuals&#x201D; pertains to knowledge and insights that consultants retain from the project (in their memories).&#xA0; Their knowledge, skills, and experience can be applied in ways that might not be acceptable and therefore an agreement could introduce limitations to prevent the consultant from applying or exploiting this knowledge in specific fields, domains, or regions.&#xA0;</li><li>This can be strengthened through the use of confidentiality and non-compete agreements, if justified, where the individual consultant&#x2019;s engagements with competitors or others, such as customers and employees, can be contractually restricted.&#xA0;</li><li>As consultants may inadvertently establish &#x201C;prior art&#x201D; in the public domain by describing aspects of their work or ideas that may have arisen from their work, this can limit the ability of the client to register certain forms of IP rights, such as patents.&#xA0; Consideration should be given in the contract to protecting against inadvertent leakage of information through public statements or publications &#x2013; even if obfuscated or not directly attributable to the specific engagement.&#xA0;</li><li>Some consultants may use subject matter experts that are contracted from outside their firm or they may use subcontractors.&#xA0; The agreements should require that the IP requirements cascade down to these firms and individuals.</li><li>When the consultancy involves the use of proprietary software or specialized tools constituting background IP, clients may consider entering into an escrow agreement. This would involve depositing the software&apos;s source code with a neutral third party. In events such as the consultant&apos;s insolvency or failure to provide necessary updates/support, the client can access the source code, ensuring continuity and beneficial ownership of the IP. The trigger events for access to escrowed source code should be wide enough to reflect the full spectrum of risks that might prevent future beneficial use of the software; this would likely extend beyond financial insolvency to include, for example, sanctions or boycotts.</li><li>In the case of a breach of contract and termination, consideration should be given to the reversion of rights. &#xA0;For example, in the case of termination for convenience or cause, the IP rights would revert to the client. This reversion could be full or partial but should nonetheless be considered, particularly in the case of software development or phased consultancy projects. 
It is also important to consider that, depending on the methodology used, the discovery of novel ideas or inventions may happen early in the project (well before anything is actually built).</li><li>Sometimes consultants may attempt to limit the transfer to copyright whilst not transferring the broader set of IP rights to the client, such as patents.&#xA0; For example, a consultant could do work for a client, discover some novel solution to a problem, and, by only transferring copyright on the actual deliverable, retain the rights to create a product based on the invention and monetise it independently.</li><li>The use of certain open-source components may introduce complexity if the licensing requirements of those components require, for example, derivative works to also be open sourced.&#xA0;</li><li>There may be geographic constraints placed on an agreement such as agreeing to exclusivity within a country or region.&#xA0; This needs to be considered in the context of the business objectives.&#xA0; Similarly, some agreements may be time-bound and not perpetual so this also requires similar consideration because of the limitations it can place on the usage of the IP.</li><li>With the growing use of generative AI, it is possible that data provided by a client will be used directly or indirectly to train, enhance, or enable AI models that are used by a consultant to deliver a service.&#xA0; This needs to be carefully considered.</li></ol>]]></content:encoded></item><item><title><![CDATA[Designing a privacy preserving rCBDC]]></title><description><![CDATA[<p>There has been a lot of global debate about privacy in the context of retail CBDCs, particularly in the context of them being a replacement for cash. The perception is that the introduction of a retail CBDC would enable broad surveillance powers for the government as well as the ability</p>]]></description><link>https://abutler.com/designing-a-privacy-preserving-rcbdc/</link><guid isPermaLink="false">64f3103a2ed91304dc752624</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sat, 02 Sep 2023 10:36:53 GMT</pubDate><content:encoded><![CDATA[<p>There has been a lot of global debate about privacy in the context of retail CBDCs, particularly in the context of them being a replacement for cash. The perception is that the introduction of a retail CBDC would enable broad surveillance powers for the government as well as the ability to programmatically control what people do with their money.&#xA0; Indeed, this is possible; however, it is also possible to design a rCBDC in a way that doesn&apos;t reduce people&apos;s privacy or the ability to conclude transactions in an anonymous way.</p>
<p>In 1983, cryptographer David Chaum invented the&#xA0;<a href="http://www.hit.bme.hu/~buttyan/courses/BMEVIHIM219/2009/Chaum.BlindSigForPayment.1982.PDF?ref=abutler.com">&#x201C;blind signature&#x201D;</a>: a method of digitally signing a message that is &#x201C;blinded&#x201D; to the signatory.&#xA0; This relatively old technique can be applied, in the context of rCBDCs, to deliver a privacy experience that resembles cash in many ways.&#xA0;</p>
<p>A blind signature works as follows: Alice, for example, wants to send Bob a message M carrying the bank&#x2019;s signature without the bank learning its contents.&#xA0;Alice takes her message, multiplies it by a blinding factor, and sends the blinded message to the bank.&#xA0;The bank generates a signature for the blinded message and sends it back to Alice.&#xA0;&#xA0;Alice then removes the blinding factor and passes the signed message to Bob, who can use the bank&#x2019;s public key to verify that the bank signed it &#x2013; without the bank ever having seen the contents of her message.</p>
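<p>The arithmetic behind Chaum&apos;s construction is compact enough to sketch directly. The following toy RSA blind signature (tiny key, illustrative serial number, no padding scheme, and not remotely production-grade) shows how the bank can sign a coin it never sees:</p>
<pre><code class="language-python"># Toy RSA blind signature (Chaum, 1983). Tiny key -- NOT secure.
import math, random

# The central bank's RSA key pair (toy primes for the demo).
p, q = 1789, 1997
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))

coin = 123456                            # serial number of an rCBDC token

# Alice blinds the coin with a random factor r.
r = random.randrange(2, n)
while math.gcd(r, n) != 1:
    r = random.randrange(2, n)
blinded = (coin * pow(r, e, n)) % n

# The central bank signs the blinded value; it never sees the coin itself.
blind_sig = pow(blinded, d, n)

# Alice unblinds: dividing out r leaves a valid signature on the coin.
sig = (blind_sig * pow(r, -1, n)) % n

# Anyone can verify the central bank's signature with the public key.
assert pow(sig, e, n) == coin
</code></pre>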
<p>Now, this can be applied to design a rCBDC that offers cash-like privacy characteristics.&#xA0;In this case, Alice would create, on her mobile device, a number of rCBDC tokens (in different denominations, similar to cash) and blind them before sending them to her commercial bank.&#xA0;The commercial bank would authenticate the request, perhaps calculate fees, and debit Alice&#x2019;s account for the total amount. &#xA0;It then passes the still-blinded message to the Central Bank, which debits the commercial bank account, digitally signs the blinded rCBDC coins, and sends them back to the commercial bank.&#xA0; As they are blinded, the Central Bank doesn&#x2019;t know who they were issued to, but it records a serial number of the issued coins in a database based on its signature of them.&#xA0;The commercial bank then sends the blinded signed coins back to Alice, who unblinds them whilst retaining the central bank signature (which demonstrates their authenticity).&#xA0;</p>
<p>When Alice wants to spend the rCBDC with Bob, she presents the coins to him via a secure peer-to-peer exchange or similar.&#xA0;Bob&#x2019;s application or device then sends the tokens electronically to his acquiring bank for validation, which then sends them to the Central Bank.&#xA0;The Central Bank checks the signature and looks the coins up in a database of spent serial numbers (to ensure no double spend).&#xA0;If all is in order, it credits the acquiring commercial bank account, marks the coins as spent, and notifies the bank, which would subsequently credit Bob&#x2019;s merchant account.&#xA0;</p>
<p>This is analogous, in many ways, to how cash works with ATMs.&#xA0;Although the rCBDC was signed when Alice withdrew it, when she then presents it for use in a payment transaction, there is no way for the particular tokens to be linked to Alice&#x2019;s withdrawal because they were blinded when the Central Bank signed them.</p>
<p>As the end user (Alice) is engaging with the rCBDC through their commercial bank, there is still the ability to ensure KYC and AML steps are performed by this bank; and the bank would have an aggregate view of how much rCBDC has been issued to a user but not necessarily what they have used the currency for.</p>
<p>At the point of acquisition, if there was a requirement to differentiate between small value and large value transactions to mitigate increasing AML risks, it would also be possible. For example, requiring that the transaction is linked to Alice&apos;s identity in the case of a large value transaction prior to passing the rCBDC coins to the Central Bank for verification.</p>]]></content:encoded></item><item><title><![CDATA[Decentralized Liquidity Savings Mechanisms with Privacy-Preserving Cryptography]]></title><description><![CDATA[<p>In the context of the wCBDC discussion, there is a lot of focus on the standard lifecycle of the CBDC from issuance through to redemption but not as much consideration of the possible liquidity implications.</p>
<p>One of the most powerful concepts in financial markets is the concept of the&#xA0;</p>]]></description><link>https://abutler.com/decentralized-liquidity-savings-mechanisms-with-privacy-preserving-cryptography/</link><guid isPermaLink="false">64f310012ed91304dc75261a</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sat, 02 Sep 2023 10:36:02 GMT</pubDate><content:encoded><![CDATA[<p>In the context of the wCBDC discussion, there is a lot of focus on the standard lifecycle of the CBDC from issuance through to redemption but not as much consideration of the possible liquidity implications.</p>
<p>One of the most powerful concepts in financial markets is the&#xA0;<a href="https://www.bankofengland.co.uk/-/media/boe/files/payments/liquidity-saving-mechanism-user-guide.pdf?ref=abutler.com">Liquidity Savings Mechanism (LSM)</a>, which the Bank of England describes in its LSM user guide:</p>
<figure class="kg-card kg-image-card"><img src="https://media.licdn.com/dms/image/D4E12AQGPUYZWE_UmYQ/article-inline_image-shrink_1500_2232/0/1693565781992?e=1698883200&amp;v=beta&amp;t=dyzMlfIMAiKdHLOSh4IMznhgG3Yxam9SiyGDO-xEsX0" class="kg-image" alt loading="lazy"></figure>
<p>In essence, rather than settle all transactions between themselves on a gross basis, banks can use an LSM to match and offset payments (thus mitigating the need for any actual transfer of funds). This allows financial institutions to deploy funds elsewhere and, by doing so,&#xA0;<a href="https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3581600&amp;ref=abutler.com">create economic benefits and address risk</a>. Even small increases in LSM efficiency can lead to significant benefits hence this remains an area of important research and innovation. For example, some&#xA0;<a href="https://www.bankofcanada.ca/wp-content/uploads/2022/12/swp2022-53.pdf?ref=abutler.com">fascinating work with Payments Canada</a>&#xA0;explored even applications of quantum computing-based algorithms to liquidity optimisation (which they estimated could deliver approximately $240m of daily liquidity saving).</p>
<p>There are many examples but, to give a sense for the scale of the benefit, CHIPS&#xA0;<a href="https://www.theclearinghouse.org/payment-systems/Articles/2023/02/02-01-2023_CHIPS_FNA_Liquidity_Report?ref=abutler.com">settle approximately $2 trillion in payments every day</a>&#xA0;however the participants only fund the system with 1/30th of this amount. In other words, for every $1 that a participant institution places in the network, they are able to settle nearly $30 in value in seconds.</p>
<p>The economic benefits of this should be clear, particularly in an environment, such as today, where interest rates are rising but also the use of LSMs removes risks by enabling more settlements to occur with finality earlier in the day than might be the case if everything was settled gross.</p>
<p>In a conventional payments platform, there is a central authority, such as a clearing house, that manages the offsetting and settlement process. However, if we envision a more decentralised model based around wCBDCs, then it raises a question of what will be the wCBDC-equivalent of an LSM since, as it stands, wCBDCs are pre-funded and settle gross, which is likely to have a significant and material negative impact on bank liquidity.</p>
<p>There is, of course, one possible scenario where the existing LSMs remain and, for settlement, trigger the transfer of a wCBDC post-netting; but this would perhaps weaken the argument of a wCBDC as an instrument of atomic DvP settlement and would also limit the resiliency benefits of a decentralised system (which is one of the oft-cited arguments for a wCBDC versus a centralised Real Time Gross Settlement system).</p>
<p>Whilst there was some early consideration of Distributed Ledger-based multilateral offsetting algorithms in which the offsetting is orchestrated programmatically by smart contracts, the immediate challenge is counter-party privacy. Not all banks should see all the individual transactions, particularly transactions for which they are not counter-parties.</p>
<p>However, there is a field of cryptography focused on zero-knowledge proofs that could enable the development of a decentralised, privacy-preserving LSM.</p>
<p>Consider a simplified example where there are three banks A, B, and C. Bank A owes B $10, B owes C $10 and C owes A $3.</p>
<ol><li><strong>Compute Net Amounts:</strong>&#xA0;Each institution computes the net amounts owed to each of the counterparties: A owes B $10, and C owes A $3, so the net amount A owes is $10 - $3 = $7. B owes C $10, and A owes B $10, so the net amount that B owes is $10 - $10 = $0 (B does not owe any money, and is not owed any money). C owes A $3, and B owes C $10, so the net amount that C owes is $3 - $10 = -$7 (i.e. C is owed $7).</li><li><strong>Create&#xA0;<a href="https://link.springer.com/chapter/10.1007/3-540-46766-1_9?ref=abutler.com"><strong>Pedersen Commitments</strong></a>:</strong>&#xA0;Each bank creates Pedersen Commitments for each of the amounts owed. This cryptographic algorithm allows a party to commit to a certain value without revealing it to others but enabling it to be revealed later.</li><li><strong>Create&#xA0;<a href="https://z.cash/learn/what-are-zk-snarks/?ref=abutler.com"><strong>zkSNARKs</strong></a>&#xA0;and&#xA0;<a href="https://arxiv.org/abs/1907.06381?ref=abutler.com"><strong>Range Proofs</strong></a>:</strong>&#xA0;Each institution then creates zkSNARKs and Range Proofs, which together prove they know the net amount in the commitment and the blinding factor used to create that commitment (without revealing it), and that the net amount lies within some valid range (proven by the range proof).</li><li><strong>Share Commitments and zkSNARKs:</strong>&#xA0;Participants then share their commitments and zkSNARKs with their counter-party participants. Any observer, such as the network organiser or Central Bank, can validate without any information being disclosed.</li><li><strong>Verify zkSNARKs:</strong>&#xA0;The participants then verify the zkSNARKs of the other participants.</li><li><strong>Open Commitments:</strong>&#xA0;Once all the proofs have been validated, the banks open their commitments to reveal the net amounts and the blinding factors.</li><li><strong>Adjust Balances:</strong>&#xA0;The participants then adjust their balances based on the net amounts revealed. If these represent the payment legs of DvP transactions, then the movement of the assets would occur atomically at this point.</li><li><strong>Create and Share New zkSNARKs:</strong>&#xA0;Participants then create and share new zkSNARKs that prove their updated balances are correct, without revealing the actual amounts.</li><li><strong>Gross Settlement:</strong>&#xA0;Finally, gross settlement would occur for the amounts owed, after the multilateral offsetting occurs. This could happen using a wCBDC or via the traditional payments infrastructure. In the example above, after multilateral offsetting, A would have to pay $7 to C (even though A had no direct obligation to C to begin with).</li></ol>
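<p>To give a feel for step 2, the sketch below shows the homomorphic property of Pedersen commitments that makes private netting possible: multiplying commitments yields a commitment to the sum of the hidden values. The group parameters are toy values for illustration; a real deployment would use elliptic-curve groups and pair the commitments with the range proofs and zkSNARKs described above:</p>
<pre><code class="language-python"># Toy Pedersen commitments in the multiplicative group mod a prime.
# Illustration only -- real systems use elliptic curves, and h must be
# generated so that nobody knows log_g(h).
import random

p = 2 ** 127 - 1      # group modulus for the demo
g, h = 5, 7           # the two generators

def commit(value, blinding):
    # C = g^value * h^blinding (mod p); exponents live modulo p - 1.
    return (pow(g, value % (p - 1), p) * pow(h, blinding % (p - 1), p)) % p

# Bank A commits to its two gross positions without revealing them.
r1, r2 = random.randrange(p - 1), random.randrange(p - 1)
c_owes_b = commit(10, r1)     # A owes B $10
c_owed_by_c = commit(-3, r2)  # C owes A $3, i.e. -3 from A's perspective

# Homomorphism: the product of commitments commits to the sum of values,
# so counterparties can check A's declared net without seeing the inputs.
net = (c_owes_b * c_owed_by_c) % p
assert net == commit(10 - 3, r1 + r2)   # a commitment to A's net $7
</code></pre>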
<p>In short, it is likely that decentralised multilateral offsetting algorithms can be implemented in privacy-preserving ways (albeit with some computational overhead). As Central Banks continue to explore how central bank money can be reimagined (or not) as a wCBDC, it is also important to explore how technological advances might enable new approaches to liquidity saving, taking advantage of some of the ongoing advances in the field of zero-knowledge proofs and secure multiparty computation to do so in privacy-preserving ways.</p>]]></content:encoded></item><item><title><![CDATA[Money is a General Purpose Technology]]></title><description><![CDATA[<p>There are certain technologies that are characterised as&#xA0;<a href="https://en.wikipedia.org/wiki/General-purpose_technology?ref=abutler.com">General Purpose Technologies (GPTs)</a>.&#xA0; These are technologies, like electricity or the computer, that have a very large economic impact: often at a national or global level.&#xA0; They are characterised by their broad usefulness and applicability to different sectors and</p>]]></description><link>https://abutler.com/money-is-a-general-purpose-technology/</link><guid isPermaLink="false">64f175dbdd95232454d41a84</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Fri, 01 Sep 2023 05:26:04 GMT</pubDate><content:encoded><![CDATA[<p>There are certain technologies that are characterised as&#xA0;<a href="https://en.wikipedia.org/wiki/General-purpose_technology?ref=abutler.com">General Purpose Technologies (GPTs)</a>.&#xA0; These are technologies, like electricity or the computer, that have a very large economic impact: often at a national or global level.&#xA0; They are characterised by their broad usefulness and applicability to different sectors and applications; they are, in some sense, platforms or enablers on which economic value is built.&#xA0; The computer or electricity are good examples of GPTs.</p>
<p>&#xA0;Whilst perhaps not obvious or intuitive, money is also a&#xA0;<a href="https://en.wikipedia.org/wiki/General-purpose_technology?ref=abutler.com">General Purpose Technology (GPT)</a>&#xA0;and is arguably one of the first technologies that humans invented; fulfilling the functions of store of value, unit of account, and a medium of exchange.&#xA0; Man invented &#x201C;money&#x201D; in order to enable trade and solve a number of challenges, such as the&#xA0;<a href="https://en.wikipedia.org/wiki/Coincidence_of_wants?ref=abutler.com">coincidence of wants problem</a>&#xA0;where, in a barter economy, a buyer who owned chickens but wanted milk might find that the man with the milk didn&#x2019;t want the chickens but wanted beef.&#xA0; &#xA0;Hence, the invention of money provided a mechanism to solve this coincidence of wants problem in an efficient way as now money could be used as the medium of exchange.&#xA0;</p>
<p>As with the computer and electricity, money has also undergone constant change and evolution: we no longer, for example, use shells or precious metals as currency, and even paper money &#x2013; first issued in Song-dynasty China around the eleventh century &#x2013; is progressively being replaced by digital transactions and digital forms of money.</p>
<p>The appearance of various forms of digitised money in recent years is therefore simply the natural continuation of a long arc of innovation that goes back many centuries.&#xA0; Discussions of Central Bank Digital Currencies (CBDCs) should be viewed in this context.</p>
<p>Blockchain is believed to be, like artificial intelligence, another General Purpose Technology that will have a transformational impact across economies. Much of that impact will come through the concept of &quot;tokenisation&quot;, where assets, such as gold, real estate, or securities, are represented as programmable tokens on a distributed ledger that can be bought, traded, and used as collateral digitally. This allows existing financial markets to be reimagined in more liquid and efficient ways, and new markets to be created for asset classes that have historically had limited liquidity or access. Switzerland&apos;s&#xA0;<a href="https://www.sdx.com/?ref=abutler.com">SIX Digital Exchange</a>&#xA0;is one example of a greenfield tokenised asset market.</p>
<p>The tokenisation of assets and the development of these new markets promises more efficient&#xA0;<a href="https://www.investopedia.com/terms/d/dvp.asp?ref=abutler.com#:~:text=Delivery%20versus%20payment%20is%20a,without%20the%20delivery%20of%20securities.">Delivery versus Payment (DvP)</a>&#xA0;trade settlement, where title to a security moves atomically with payment, mitigating the counterparty risks that have often led to expensive intermediaries, rent-seeking, and other market inefficiencies. However, whilst the tokenisation of assets provides the &quot;delivery leg&quot;, how will the &quot;payment leg&quot; be settled in this market?</p>
<p>The development of tokenised forms of money is therefore a possible response: an enabler of atomic DvP settlement and of the broader range of use cases and value-creation opportunities that flow from the tokenisation of assets.</p>
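<p>To illustrate what &#x201C;atomic&#x201D; means here, the following is a minimal sketch of a DvP settlement function over a hypothetical in-memory ledger. All names, identifiers, and balances are invented for the example; in practice, the all-or-nothing state transition would be enforced by a smart contract on a shared ledger rather than by application code.</p>
<pre><code class="language-python"># Toy ledgers: cash balances per participant, and security holdings keyed by
# (holder, instrument identifier). The identifiers below are hypothetical.
def settle_dvp(cash, securities, buyer, seller, price, instrument, quantity):
    """Settle both legs of a trade atomically: either both move or neither does."""
    # Check both legs before touching any state, so a failure leaves
    # the ledgers exactly as they were.
    if cash.get(buyer, 0) &lt; price:
        raise ValueError("payment leg would fail: buyer lacks funds")
    if securities.get((seller, instrument), 0) &lt; quantity:
        raise ValueError("delivery leg would fail: seller lacks securities")
    # Both preconditions hold: apply the two legs as one state transition.
    cash[buyer] -= price
    cash[seller] = cash.get(seller, 0) + price
    securities[(seller, instrument)] -= quantity
    securities[(buyer, instrument)] = securities.get((buyer, instrument), 0) + quantity

# Example: the buyer pays 100 in tokenised money for 10 units of a bond.
cash = {"buyer": 150, "seller": 0}
securities = {("seller", "BOND-1"): 10}
settle_dvp(cash, securities, "buyer", "seller", 100, "BOND-1", 10)
print(cash)        # {'buyer': 50, 'seller': 100}
print(securities)  # {('seller', 'BOND-1'): 0, ('buyer', 'BOND-1'): 10}
</code></pre>
<p>The check-everything-then-apply pattern is the essence of atomic DvP: because title and payment move in one indivisible step, neither party is ever exposed to having delivered one leg without receiving the other.</p>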
<p>Much of this payments innovation will happen in the private sector through the development of technologies such as tokenised bank deposits (which arguably offer a much less disruptive and more effective instrument than stable coins), stable coins themselves, or other instruments. However, to enable this innovation, there may still be a need for a digitally native form of central bank money: a Central Bank Digital Currency (CBDC).</p>
<p>Firstly, the&#xA0;<a href="https://www.bis.org/pfmi/help/principleid.htm?ref=abutler.com">Principles for Financial Market Infrastructures (PFMI)</a>&#xA0;require (Principle 9) that financial market infrastructures conduct their money settlements in central bank money where practical and available. It therefore stands to reason that, if markets are going to be tokenised, there is value in exploring the most efficient and effective way for transactions in these new digital markets to be settled in central bank money.</p>
<figure class="kg-card kg-image-card"><img src="https://media.licdn.com/dms/image/D4E12AQF1uyZITjXBEw/article-inline_image-shrink_1500_2232/0/1693044377692?e=1698883200&amp;v=beta&amp;t=ITyCgjkO0r3tOZuoC15ZgCqPYsZMv7cvp6ATMitkesE" class="kg-image" alt loading="lazy"></figure>
<p>Secondly, there are private sector innovations that are likely to be enabled by a CBDC. For example, <a href="https://www.pymnts.com/news/blockchain-distributed-ledger/2023/tokenized-deposits-gain-currency-around-the-globe/?ref=abutler.com#:~:text=Tokenized%20deposits%20are%20tied%20to,are%20recorded%20on%20distributed%20ledgers.">tokenised deposits are a promising technology</a>&#xA0;that banks could introduce; but, because they are claims on the issuing bank, they raise a&#xA0;<a href="https://www.bis.org/publ/bisbull73.htm?ref=abutler.com">&quot;singleness of money&quot;</a>&#xA0;concern: if one bank is perceived as financially less secure than another, that difference could be priced into the value of its tokens. The ability to settle using a wholesale CBDC would be one mechanism by which this risk could be mitigated.</p>
<p>Thirdly, returning to the original point of money as a General Purpose Technology and building on the previous observation, the creation of a CBDC could, like the invention of prior forms of money, be viewed as a platform: an enabler of innovations that we may not yet be able to foresee. In doing so, it could create incremental economic value through lower transaction costs, new efficiencies, and the enablement of entirely new products and instruments.</p>
<p>As such, perhaps the strongest argument for the exploration of CBDC may be as a&#xA0;<a href="https://en.wikipedia.org/wiki/Public_good_(economics)?ref=abutler.com">public good</a>. This certainly seems to be the&#xA0;<a href="https://www.bankofengland.co.uk/quarterly-bulletin/2023/2023/enabling-innovation-through-a-digital-pound?ref=abutler.com">view of the Bank of England</a>, as evidenced in their recent update on the Digital Pound initiative, which articulates how a Digital Pound could enable new innovations by introducing the technology platform and convening a new market around it. As part of this, standards would be set that enable new and existing firms to build innovative products and services on this new payments technology, while the new data it generates is captured and leveraged in new ways.</p>]]></content:encoded></item><item><title><![CDATA[ChatGPT is the ‘Netscape moment’ for artificial intelligence]]></title><description><![CDATA[<p>Originally published in Arab News.</p>
<p>It is impossible for anyone to have missed the excitement generated by ChatGPT. Countless articles on the subject have been written, including many by ChatGPT.</p>
<p>While underlying technologies, such as deep learning, are not new, ChatGPT&#x2019;s rich conversational interface has captured the popular</p>]]></description><link>https://abutler.com/chatgpt-is-the-netscape-moment-for-artificial-intelligence/</link><guid isPermaLink="false">64f173cedd95232454d41a6a</guid><dc:creator><![CDATA[Anthony Butler]]></dc:creator><pubDate>Sun, 28 May 2023 05:17:00 GMT</pubDate><content:encoded><![CDATA[<p>Originally published in Arab News.</p>
<p>It is impossible for anyone to have missed the excitement generated by ChatGPT. Countless articles on the subject have been written, including many by ChatGPT.</p>
<p>While underlying technologies, such as deep learning, are not new, ChatGPT&#x2019;s rich conversational interface has captured the popular imagination around artificial intelligence in the same way Netscape made the World Wide Web real for millions worldwide when the browser first appeared in the 1990s.</p>
<p>ChatGPT is built on something called a Large Language Model.<br><br>LLMs are artificial models trained on huge corpora of text using something called &#x201C;unsupervised learning,&#x201D; where they are not explicitly taught but instead fed vast quantities of text to learn the relationships between words and the underlying concepts, essentially developing a statistical model of what words are likely to follow other words given a particular prompt or starting point. In some sense, they seem like &#x201C;autocomplete on steroids.&#x201D;<br><br>They are therefore remarkably effective at giving responses to questions, summarizing texts and producing large amounts of text content based on some prompting. For example, we are seeing global law firms explore how these models can automatically create the skeletons of contracts without requiring lawyers or paralegals to draft them.<br><br>We see articles in publications authored by AI and panicked university officials wondering about the high-tech plagiarism these generative AI tools will enable.<br><br>However, it is also important to remember their current shortcomings: these models &#x201C;understand&#x201D; the statistics of language and, through this, the relationship between words, but they do not have knowledge of the world, common sense or the ability to reason.<br><br>Hence, they struggle to tackle riddles or perform complex mathematics, and they are prone to &#x201C;hallucinations&#x201D; where they generate text that, while superficially plausible, might be completely false, offensive, or misleading.<br><br>For example, a model could generate a scientific paper that looks and feels like a research paper but is based entirely on nonsensical arguments and content. In a more nefarious example, these models could enable the mass production of highly plausible misinformation that could poison search engine results or mislead people in destructive or harmful ways.<br><br>As we look to the future, these models will continue to evolve rapidly. But they will need to be augmented by systems that, like humans, have common sense, an understanding of the world, some sense of ethics and the ability to reason. That would bring them closer to how human minds operate. Humans have a fast mode of thinking that makes near-instantaneous decisions, such as identifying an object in our field of vision or reading a sentence.<br><br>We also have a second, slower type of thinking that requires more effort and is both conscious and logical. While the former closely resembles what we see today with LLMs&#x2019; ability to recognize words without deeply understanding context or semantics, the latter form of thinking is an emergent trajectory of AI research focused heavily on learning rules, such as the rules of physics or of ethical behavior.<br><br>We are also seeing the emergence of foundation models, such as generative pre-trained transformers, which can be trained once and then extended and reused broadly at minimal marginal cost; for example, without requiring the vast amount of computational capability and power needed to train GPT or similar models from scratch.<br><br>These AI foundations are similar to web, mobile, and social. 
They are the next platform &#x2014; a foundation on which new value will be created through new applications made possible by this general-purpose technology.<br><br>Models, such as those underlying ChatGPT, can be enriched and extended with domain-specific or licensed data and embedded in applications to provide a new way of engaging with a business or product.<br><br>For example, one could take today&#x2019;s LLMs and train them further on the corpus of consultancy and research reports across an entire government, allowing employees to ask questions in natural language or generate presentations or materials &#x2014; without the need to re-engage a consultant. This accelerated adoption of AI comes at a critical inflection point when much of the world faces inflationary pressures and rapidly rising labor costs.<br><br>AI will enable systems and machines to learn how to perform tasks currently performed by humans so that firms can be more productive and reduce their reliance on increasingly expensive human labor.<br><br>AI, like automation, can also have a deflationary effect, making it a vital productivity lever during these challenging economic times. In the long term, we can also see that, unlike Saudi Arabia, many developed countries face a demographic challenge: rapidly aging populations and shrinking workforces.<br><br>It is easy to see how the widespread proliferation of AI can help ensure these economies&#x2019; future sustainability and prosperity.<br><br>In a Saudi context, the broad recognition of the value of data and AI, as exemplified by organizations such as the Saudi Data and AI Authority, the extensive multi-decade efforts to train a cadre of Saudi engineers, scientists, and technologists, and the investments and programs launched by national champions such as Aramco to develop local AI capabilities, make the Kingdom exceptionally well placed to capture this opportunity.<br><br>For example, the Kingdom can lead in the localization and extension of LLMs to the languages and dialects of the region, or explore how the knowledge embedded in domains in which the country is a natural leader, such as energy, can be used to build foundation models that can then be made available broadly.<br><br>If the lessons of the internet age are any guide, we are at a pivotal point in the evolution of AI. Though the best time to engage with AI was yesterday, the next best time is today. Therefore, all Saudi public and private sector entities must be encouraged to explore how this technology can create new value in their respective fields and industries.</p>]]></content:encoded></item></channel></rss>