<?xml version="1.0" encoding="utf-8" standalone="yes"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Techblog on HeadGym : AI for ALL</title>
    <link>https://headgym.com/tags/techblog/</link>
    <description>Recent content in Techblog on HeadGym : AI for ALL</description>
    <generator>Hugo</generator>
    <language>en-us</language>
    <atom:link href="https://headgym.com/tags/techblog/index.xml" rel="self" type="application/rss+xml" />
    <item>
      <title>Prompt Architectures</title>
      <link>https://headgym.com/blog/posts/tech/prompt-architectures/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://headgym.com/blog/posts/tech/prompt-architectures/</guid>
      <description>&lt;p&gt;Structuring prompts effectively for AI is essential for getting the desired outcomes. Different prompt frameworks can help you craft prompts that are clear, concise, and likely to yield relevant and accurate responses.&lt;/p&gt;&#xA;&lt;h1 id=&#34;background&#34;&gt;Background&lt;/h1&gt;&#xA;&lt;p&gt;These strategies were extracted for a client-side UI that I am working on, which utilises these structures to render forms based on a strategy notation (more on that later).&lt;/p&gt;&#xA;&lt;p&gt;My interest in these was prompted by my work with MidJourney, where using image references rather than prompts allowed me to extract a unique and controlled set of outcomes. My observation from that exercise with MidJourney was that interaction models and a good understanding of the model itself can significantly improve outcomes.&lt;/p&gt;</description>
    </item>
    <item>
      <title>Inference Router</title>
      <link>https://headgym.com/content/en/techblog/inference-router/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://headgym.com/content/en/techblog/inference-router/</guid>
      <description>&lt;p&gt;As large language models (LLMs) continue to permeate software architectures, a new set of challenges has emerged around optimising their performance, cost, and management at scale. In particular, if you’re working in non-trivial, high-volume environments as we do at &lt;a href=&#34;https://headgym.com/&#34;&gt;HeadGym&lt;/a&gt;, efficiently managing the routing of LLM requests is not just an optimisation — it’s a necessity.&lt;/p&gt;&#xA;&lt;p&gt;At the heart of these optimisations is the &lt;strong&gt;inference router&lt;/strong&gt;, a rather simple yet powerful concept that is becoming increasingly indispensable in the LLM ecosystem. Whether it’s routing traffic to the least expensive model, capturing analytics, or adding governance controls, the inference router serves as a critical middleware layer that streamlines and optimises how LLMs are used in production environments.&lt;/p&gt;</description>
    </item>
    <item>
      <title>PostgREST</title>
      <link>https://headgym.com/content/en/techblog/postgrest/</link>
      <pubDate>Mon, 01 Jan 0001 00:00:00 +0000</pubDate>
      <guid>https://headgym.com/content/en/techblog/postgrest/</guid>
      <description>&lt;p&gt;While exploring the Supabase documentation, I discovered that Supabase leverages &lt;strong&gt;PostgREST&lt;/strong&gt;, a powerful yet often overlooked tool. Despite its relative obscurity, PostgREST is incredibly valuable for quickly building prototypes and scaling data-driven applications. In this article, I’ll introduce PostgREST and walk you through how to start using it with &lt;strong&gt;Supabase&lt;/strong&gt;.&lt;/p&gt;&#xA;&lt;p&gt;&lt;strong&gt;PostgREST&lt;/strong&gt; is an open-source tool that automatically turns your PostgreSQL database into a RESTful API. By leveraging PostgreSQL’s powerful features such as stored procedures, views, and triggers, PostgREST enables seamless interaction with your database through standard HTTP methods (GET, POST, PATCH, DELETE), without the need for custom backend code.&lt;/p&gt;</description>
    </item>
  </channel>
</rss>
