<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[ThatTechGuy | Talk Tech with Enoch]]></title><description><![CDATA[ThatTechGuy | Talk Tech with Enoch offers tutorials, coding tips, and insights on software development, fintech, blockchain, and more, helping beginners and pros enhance their skills.]]></description><link>https://www.thatsametechguy.com</link><image><url>https://cdn.hashnode.com/res/hashnode/image/upload/v1728660045733/1d799edb-dec8-4bec-9a8d-cea26c6761f8.png</url><title>ThatTechGuy | Talk Tech with Enoch</title><link>https://www.thatsametechguy.com</link></image><generator>RSS for Node</generator><lastBuildDate>Sun, 19 Apr 2026 08:39:59 GMT</lastBuildDate><atom:link href="https://www.thatsametechguy.com/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[Treating AI Like Any Other Dependency]]></title><description><![CDATA[Who this is for: Engineers building or operating LLM-powered features in production environments.
What you'll learn: How to think about LLM integrations as infrastructure dependencies, and the operational challenges around reliability, cost, and obse...]]></description><link>https://www.thatsametechguy.com/treating-ai-like-any-other-dependency</link><guid isPermaLink="true">https://www.thatsametechguy.com/treating-ai-like-any-other-dependency</guid><category><![CDATA[AI]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Artificial Intelligence]]></category><category><![CDATA[llm]]></category><category><![CDATA[large language models]]></category><dc:creator><![CDATA[Enoch Olutunmida]]></dc:creator><pubDate>Wed, 07 Jan 2026 00:54:47 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1767506082271/1f7cc884-22bc-47be-bb09-8b62f6cf5ade.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p><strong>Who this is for:</strong> Engineers building or operating LLM-powered features in production environments.</p>
<p><strong>What you'll learn:</strong> How to think about LLM integrations as infrastructure dependencies, and the operational challenges around reliability, cost, and observability.</p>
<hr />
<h2 id="heading-what-changes-when-ai-becomes-part-of-your-system">What changes when AI becomes part of your system</h2>
<p>Building an LLM-powered feature for a proof of concept (PoC) is like cooking for yourself, as opposed to running a restaurant. You can experiment and take shortcuts. If something breaks, you’re the only one who cares.</p>
<p>Deploying that same feature to production is fundamentally different. You're serving real users with real expectations. Latency matters. Consistency matters. Cost per request matters. When something breaks, users notice.</p>
<p>The gap between "works in Postman" and "handles thousands of requests per minute reliably" is where most teams struggle. Not because the models are weak, but because the systems were architected for the PoC environment and never properly redesigned for production load.</p>
<hr />
<h2 id="heading-understanding-the-shift-poc-vs-production">Understanding the shift: PoC vs production</h2>
<p>The difference between a proof of concept and production isn't just about scale. It's about operational maturity.</p>
<div class="hn-table">
<table>
<thead>
<tr>
<th>PoC</th><th>Production</th></tr>
</thead>
<tbody>
<tr>
<td>Single use case</td><td>Many user paths</td></tr>
<tr>
<td>Low traffic</td><td>Variable, bursty load</td></tr>
<tr>
<td>Manual testing</td><td>Continuous evaluation</td></tr>
<tr>
<td>Cost often ignored</td><td>Cost is a hard constraint</td></tr>
<tr>
<td>Failures tolerated</td><td>Failures managed</td></tr>
</tbody>
</table>
</div><p>In a PoC, you're validating whether something can work. In production, you're proving it works reliably under real-world conditions whilst meeting your SLAs and SLOs. That distinction changes everything about how you design and operate your system.</p>
<hr />
<h2 id="heading-treating-llms-as-dependencies-not-features">Treating LLMs as dependencies, not features</h2>
<p>Once you deploy code that calls an LLM API, it behaves like any other external service you depend on. Think about Stripe for payments or SendGrid for email. You don't assume these services are always fast or always available. You design around their constraints and build your system to handle their limitations gracefully.</p>
<p>LLM APIs need exactly the same treatment. When you make calls to an LLM, you're making network requests to something you don't control. These APIs have variable latency, rate limits, per-token pricing, and non-deterministic failures.</p>
<p>Once an LLM call sits in your critical path, your service inherits all these characteristics. Running LLM-powered features in production becomes less about prompt engineering and more about distributed systems design. You're building infrastructure that needs to be reliable, cost-effective, and observable.</p>
<h3 id="heading-where-llms-sit-in-your-architecture">Where LLMs sit in your architecture</h3>
<pre><code class="lang-text">User Request
    |
    v
Application Layer
    |
    +--&gt; Payment API (Stripe / Paystack)
    |
    +--&gt; Email Service (SendGrid)
    |
    +--&gt; LLM API (OpenAI / Anthropic / etc)
            |
            +--&gt; Latency (varies by request)
            +--&gt; Cost (per token)
            +--&gt; Rate limits (requests/min, tokens/min)
            +--&gt; Failure modes (timeouts, rate limits, errors, hallucinations)
</code></pre>
<p>LLMs sit alongside your other critical dependencies. They're infrastructure, not magic. Just another API that needs the same operational rigour as everything else in your stack.</p>
<hr />
<h2 id="heading-the-reliability-challenge-llms-fail-differently">The reliability challenge: LLMs fail differently</h2>
<p>Traditional APIs fail in obvious ways. Connection timeouts. 500 errors. Rate limit responses. Your monitoring catches these failures immediately.</p>
<p>LLM APIs often fail quietly[1]. The request succeeds with a 200 status code. The response is well-formed JSON that matches your expected schema. But the actual content is wrong, inconsistent, or low quality. From an infrastructure perspective, everything looks healthy. Your uptime metrics are green. Your error rates are low. But from a user's perspective, the feature is broken.</p>
<p>These quiet failures show up in ways that standard monitoring doesn't catch. Inconsistent responses to the same input[7]. Slow quality degradation over time. Catastrophic failures on edge case inputs that only surface when users report them. This requires different approaches to detection and handling.</p>
<h3 id="heading-what-reliability-means-for-llm-powered-features">What reliability means for LLM-powered features</h3>
<p>For features that depend on LLMs, reliability isn't just about keeping the service up. It includes maintaining consistent output quality[7], ensuring predictable behaviour across your input distribution, keeping latency acceptable under production load, and having well-defined fallback behaviour when things go wrong.</p>
<p>The failure modes you need to handle fall into several categories:</p>
<p><strong>Hard failures:</strong> Timeouts, 429 rate limit errors, 503 service unavailable responses.</p>
<p><strong>Soft failures:</strong> Latency creeping up, throughput degrading, response quality declining over time.</p>
<p><strong>Silent failures:</strong> Quality drift that shows up in your data but triggers no alerts or logs[1]. Just users slowly losing trust.</p>
<p><strong>Cost failures:</strong> Token usage growing faster than expected or faster than revenue, making the feature economically unsustainable.</p>
<p>Silent failures are particularly difficult. No alerts. No error logs. No obvious signs that something is wrong. Just users noticing the feature isn't as good as it used to be.</p>
<hr />
<h2 id="heading-designing-systems-that-handle-failure-gracefully">Designing systems that handle failure gracefully</h2>
<p>Production services never assume their dependencies are perfect. When Stripe is slow, you don't block the entire checkout flow indefinitely. When email delivery fails, you queue the message and retry with exponential backoff. The same principles apply to LLM calls.</p>
<p>You need hard timeouts on every request to enforce your latency requirements. If your feature needs to respond in under 2 seconds, your LLM call might need a 1 second timeout to leave room for other processing. You need fallback responses ready for when quality degrades or latency spikes beyond acceptable levels[2]. Dynamic request routing based on complexity and current system state helps here.</p>
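<p>As an illustration, a deadline-plus-fallback wrapper might look like the sketch below. <code>callLlm</code> is a hypothetical stand-in for your actual provider call, not a real SDK function:</p>

```javascript
// Sketch: enforce a hard timeout on an LLM call and fall back on failure.
// `callLlm` is a placeholder for the real provider request.
async function completeWithTimeout(prompt, timeoutMs = 1000) {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // Pass the signal so the request is cancelled when the deadline expires.
    const response = await callLlm(prompt, { signal: controller.signal });
    return { source: 'llm', text: response };
  } catch (err) {
    // Timeout, rate limit, or provider error: serve a predefined fallback
    // instead of letting the whole request hang.
    return { source: 'fallback', text: 'Sorry, please try again shortly.' };
  } finally {
    clearTimeout(timer);
  }
}
```

<p>The fallback is deliberately boring: a canned response that keeps the user journey moving while the failure stays contained.</p>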
<p>You need circuit breakers to stop retry storms when the LLM provider is having issues. And you need request routing logic that sends simple queries to faster, cheaper models whilst saving expensive, powerful models for complex tasks.</p>
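<p>A circuit breaker fits in a few lines. The thresholds and the simple half-open behaviour below are illustrative; in production you would likely reach for a hardened library rather than this sketch:</p>

```javascript
// Minimal circuit breaker sketch for calls to an external LLM API.
class CircuitBreaker {
  constructor({ failureThreshold = 5, cooldownMs = 30_000 } = {}) {
    this.failureThreshold = failureThreshold;
    this.cooldownMs = cooldownMs;
    this.failures = 0;
    this.openedAt = null;
  }

  isOpen(now = Date.now()) {
    if (this.openedAt === null) return false;
    if (now - this.openedAt >= this.cooldownMs) {
      // Cooldown elapsed: close the circuit and allow a trial request.
      this.openedAt = null;
      this.failures = 0;
      return false;
    }
    return true;
  }

  async call(fn) {
    if (this.isOpen()) throw new Error('circuit open: failing fast');
    try {
      const result = await fn();
      this.failures = 0; // any success resets the failure count
      return result;
    } catch (err) {
      this.failures += 1;
      if (this.failures >= this.failureThreshold) this.openedAt = Date.now();
      throw err; // still surface the error to the caller
    }
  }
}
```

<p>Failing fast while the provider is already struggling is what stops retry storms from making the outage worse.</p>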
<p>The goal isn't to prevent failure completely. That's impossible with any external dependency. The goal is to contain failures, make them predictable, and ensure they don't cascade through your system.</p>
<h3 id="heading-building-effective-guardrails">Building effective guardrails</h3>
<p>Guardrails help you catch problems before they reach users[2,4,5]. Input validation catches prompt injection attempts and adversarial inputs[6]. Output validation detects hallucinations and low-quality responses[3,4]. Consistency checks ensure responses align with expected patterns[7].</p>
<p>These guardrails need to be fast without adding significant latency to your request path. And they need to be reliable. A guardrail that fails open defeats the purpose. When building agentic AI systems that make multiple LLM calls per user action, guardrails become even more critical. One bad output early in the chain can cascade into increasingly worse outputs downstream.</p>
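<p>As a rough sketch, an output guardrail can start with cheap structural checks before any deeper quality evaluation. The rules below (valid JSON, a required <code>answer</code> field, a length budget) are examples, not a standard:</p>

```javascript
// Illustrative output guardrail: fast structural checks that fail closed.
function validateLlmOutput(raw, { maxChars = 4000 } = {}) {
  const problems = [];
  let parsed = null;
  let isJson = true;
  try {
    parsed = JSON.parse(raw); // assumes the model was asked for JSON output
  } catch {
    isJson = false;
    problems.push('response is not valid JSON');
  }
  if (raw.length > maxChars) problems.push('response exceeds length budget');
  if (isJson && (parsed === null || typeof parsed.answer !== 'string')) {
    problems.push('missing required "answer" field');
  }
  // Fail closed: an invalid response is rejected, never passed through.
  return { ok: problems.length === 0, problems, parsed };
}
```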
<hr />
<h2 id="heading-the-cost-challenge-budgeting-for-token-usage">The cost challenge: budgeting for token usage</h2>
<p>Traditional infrastructure costs scale with compute and storage. They're relatively predictable and easy to forecast. LLM costs work fundamentally differently.</p>
<p>Your LLM costs scale with request volume, input token count[9] (both prompt and context), output token count (completion length), and model choice. Premium models might cost 10-30x more per token than smaller models. Small architectural decisions compound quickly.</p>
<p>Adding 2,000 tokens of context to every request "just to be safe" across a million requests per day can add thousands of pounds to your monthly bill. Not setting reasonable output length limits means some requests might generate 2,000 token responses whilst others generate 200 tokens. Those long responses cost 10x more.</p>
<p>These aren't edge cases. They're systematic cost drivers you need to think about at design time, not after your first invoice arrives. When building systems that operate at scale, cost optimisation isn't optional. It's the difference between a viable product and one that burns money faster than it generates value.</p>
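<p>A back-of-envelope model makes the compounding visible. The per-million-token prices below are hypothetical placeholders; substitute your provider's actual rates:</p>

```javascript
// Rough monthly cost model. All prices here are illustrative, not real rates.
function estimateMonthlyCost({ requestsPerDay, inputTokens, outputTokens,
                               inputPricePerMTok, outputPricePerMTok }) {
  const perRequest =
    (inputTokens / 1e6) * inputPricePerMTok +
    (outputTokens / 1e6) * outputPricePerMTok;
  return perRequest * requestsPerDay * 30;
}

// One million requests a day on a hypothetical small model:
const base = estimateMonthlyCost({
  requestsPerDay: 1_000_000, inputTokens: 500, outputTokens: 300,
  inputPricePerMTok: 0.15, outputPricePerMTok: 0.6,
});

// Same traffic with 2,000 "just to be safe" context tokens added per request:
const padded = estimateMonthlyCost({
  requestsPerDay: 1_000_000, inputTokens: 2_500, outputTokens: 300,
  inputPricePerMTok: 0.15, outputPricePerMTok: 0.6,
});

// The padding alone adds roughly 9,000 per month at these illustrative rates.
const extraPerMonth = padded - base;
```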
<h3 id="heading-making-caching-a-core-part-of-your-architecture">Making caching a core part of your architecture</h3>
<p>The fastest way to burn through your LLM budget is calling the API for the same thing multiple times. This is where caching becomes essential[8].</p>
<p>You need exact match caching for identical prompts. If 100 users ask the same question, hit the LLM once and serve the other 99 from cache. You need semantic similarity caching for near-duplicates. If users are asking essentially the same question with slightly different wording, you shouldn't need 50 separate LLM calls. Set appropriate TTLs based on how often your ground truth changes.</p>
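<p>An exact-match cache with a TTL is only a few lines; semantic similarity caching (matching near-duplicate prompts via embeddings) would sit in front of it and is omitted here for brevity:</p>

```javascript
// Minimal exact-match prompt cache with a TTL.
class PromptCache {
  constructor({ ttlMs = 5 * 60 * 1000 } = {}) {
    this.ttlMs = ttlMs;
    this.entries = new Map();
  }

  get(prompt, now = Date.now()) {
    const hit = this.entries.get(prompt);
    if (!hit) return null;
    if (now - hit.storedAt > this.ttlMs) {
      this.entries.delete(prompt); // expired: the ground truth may have moved
      return null;
    }
    return hit.response;
  }

  set(prompt, response, now = Date.now()) {
    this.entries.set(prompt, { response, storedAt: now });
  }
}

// 100 identical questions should trigger exactly one LLM call.
async function answer(cache, prompt, callLlm) {
  const cached = cache.get(prompt);
  if (cached !== null) return cached;
  const fresh = await callLlm(prompt);
  cache.set(prompt, fresh);
  return fresh;
}
```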
<p>Not every request needs real-time execution. Batch processing trades latency for efficiency. Lower cost per request. Better rate limit utilisation. More predictable system behaviour. These are standard distributed systems patterns. They just matter more when you're paying per token.</p>
<p>Proper caching can significantly reduce LLM costs compared to naive implementations. The difference between profitable and unprofitable often comes down to how well you cache.</p>
<h3 id="heading-monitoring-token-usage">Monitoring token usage</h3>
<p>You need visibility into how tokens are actually being used[9]. Track input and output token counts per request. Break this down by feature, by user type, by request pattern. This tells you where costs are coming from and where optimisation efforts will have the most impact.</p>
<p>Look for outliers. If most requests use 500 input tokens but some use 5,000, investigate why. If some users consistently generate much longer outputs than others, understand what's driving that behaviour. Token counting should be instrumented in your code, not something you check manually in your provider's dashboard.</p>
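<p>A sketch of that instrumentation, assuming the provider returns token counts in its usage metadata (field names vary by provider):</p>

```javascript
// Per-feature token accounting, aggregated in memory for illustration.
// A real system would ship these numbers to your metrics backend.
function recordTokenUsage(store, { feature, inputTokens, outputTokens }) {
  const stats = store.get(feature) ||
    { requests: 0, input: 0, output: 0, maxInput: 0 };
  stats.requests += 1;
  stats.input += inputTokens;
  stats.output += outputTokens;
  stats.maxInput = Math.max(stats.maxInput, inputTokens); // outlier tracking
  store.set(feature, stats);
  return stats;
}
```

<p>Comparing <code>maxInput</code> against the per-request average is a quick way to spot the 5,000-token outliers hiding behind a healthy-looking mean.</p>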
<hr />
<h2 id="heading-the-observability-requirement-you-cant-improve-what-you-dont-measure">The observability requirement: you can't improve what you don't measure</h2>
<p>You can't debug what you can't see. In production, you need deep visibility into how your LLM integration is actually performing.</p>
<p>Track request latency at different percentiles. Knowing your median latency is useful, but knowing your 95th and 99th percentile latency tells you what your worst-case users experience. Track cost per request, per user, and per feature. Without this, you can't make informed decisions about which features are economically viable.</p>
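<p>Percentiles are cheap to compute from recorded samples. The sketch below uses the nearest-rank method; your metrics backend may interpolate differently:</p>

```javascript
// Nearest-rank percentile over latency samples in milliseconds.
function percentile(samplesMs, p) {
  if (samplesMs.length === 0) return null;
  const sorted = [...samplesMs].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(0, rank - 1)];
}

// With samples of 1..100 ms, p50 is 50 and p99 is 99.
```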
<p>Track output quality metrics over time so you can detect degradation before users start complaining. Track token usage patterns showing your input and output distributions. Track failure rates broken down by type: timeouts, rate limits, error responses, and those silent quality failures.</p>
<p>Without this visibility, everything becomes guesswork. Cost optimisation turns into randomly trying things. Performance debugging becomes impossible. Quality issues go unnoticed until they've already impacted significant numbers of users.</p>
<h3 id="heading-building-observability-into-your-llm-integration">Building observability into your LLM integration</h3>
<pre><code class="lang-text">LLM Request
    |
    +--&gt; Metrics (latency at different percentiles, cost, token count)
    |
    +--&gt; Logs (sanitised inputs/outputs, model version, parameters)
    |
    +--&gt; Traces (end-to-end request flow, downstream dependencies)
    |
    +--&gt; Alerts (violations of your latency and quality targets, cost spikes)
</code></pre>
<p>Good observability turns the LLM from a black box into something you can reason about, debug, and continuously improve. You can see exactly where latency comes from. You can identify which features or request patterns drive costs. You can detect quality degradation early. You can make data-driven decisions about optimisations and architectural changes.</p>
<h3 id="heading-continuous-evaluation-matters">Continuous evaluation matters</h3>
<p>Testing once at launch isn't enough[3]. Your evaluation needs to be continuous because the system constantly drifts. Models get updated. User behaviour evolves. New edge cases emerge. Input distributions shift.</p>
<p>Set up automated quality checks that run against a representative sample of production traffic. Compare outputs over time to detect drift. Alert when quality metrics drop below acceptable thresholds. This doesn't mean evaluating every single request. But you need systematic, ongoing evaluation that catches problems before they become visible to users.</p>
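<p>The drift check itself can be simple once you have a quality score per sampled request. How that score is produced (human labels, heuristics, an LLM-as-judge) is the hard part and is assumed here:</p>

```javascript
// Alert when the mean quality score of a recent sample drops more than
// `maxDrop` below the baseline. Thresholds are illustrative.
function checkQualityDrift({ baselineMean, recentScores, maxDrop = 0.05 }) {
  const recentMean =
    recentScores.reduce((sum, s) => sum + s, 0) / recentScores.length;
  const drop = baselineMean - recentMean;
  return { recentMean, drop, alert: drop > maxDrop };
}
```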
<hr />
<h2 id="heading-production-traffic-behaves-differently-than-test-traffic">Production traffic behaves differently from test traffic</h2>
<p>Features that work perfectly in test environments often break in production. Usually not because the code changed. The inputs changed.</p>
<p>Production brings adversarial inputs where users try prompt injection or attempt to jailbreak your system[6]. It brings traffic spikes when your app gets featured or goes viral. Suddenly you're dealing with 10x or 100x your normal load. It brings integration latency where your database is slow or another service is degraded, making your LLM calls wait even though the LLM itself is fast. And it brings upstream changes: model updates from your provider, API changes you didn't expect, provider-side issues that impact your availability.</p>
<p>This is why testing once at launch isn't enough. The environment is always changing. Your system needs to adapt continuously.</p>
<hr />
<h2 id="heading-shifting-from-modelling-problems-to-operational-problems">Shifting from modelling problems to operational problems</h2>
<p>Once you ship LLM-powered features, the challenges are fundamentally operational. You're not spending most of your time making the model smarter or crafting the perfect prompt. You're handling timeout logic and retry behaviour with exponential backoff. You're implementing circuit breakers and fallback paths. You're setting cost budgets with alerting thresholds. You're building cache invalidation strategies. You're adding token counting middleware to prevent expensive requests. You're versioning prompts so you can roll back when quality drops. You're building quality regression detection into your monitoring.</p>
<p>The model's capabilities still matter. But they're not your bottleneck. Your bottleneck is whether you can operate this reliably at scale whilst keeping costs under control and maintaining the quality your users expect.</p>
<p>Production incidents often reveal that the model is working perfectly, but timeout handling is missing and requests hang for extended periods, degrading the entire service. Uptime metrics show green because technically the service is up, but users can't complete actions because requests aren't completing.</p>
<p>Costs can spike significantly when proper caching isn't implemented and the API is redundantly called for the same prompts thousands of times. The feature works, but it isn't economically sustainable.</p>
<p>Quality can degrade over several days when there's no automated way to detect drift. By the time users complain, significant portions of the user base have been impacted. The model hasn't changed. The prompts haven't changed. But the input distribution has shifted in ways the system can't detect.</p>
<p>These aren't hypothetical scenarios. They're real problems that happen when you treat LLMs as something special instead of as infrastructure that needs proper operational discipline.</p>
<p>The work of running LLMs in production isn't about making the model smarter. It's about building the infrastructure, monitoring, and operational processes around it so it runs reliably, cost-effectively, and predictably at scale.</p>
<hr />
<h2 id="heading-references">References</h2>
<p>[1] Vinay, V. (2025). Failure Modes in LLM Systems: A System-Level Taxonomy for Reliable AI Applications. Microsoft Security Research. https://arxiv.org/pdf/2511.19933</p>
<p>[2] OpenAI. (2023). How to use guardrails. OpenAI Cookbook. https://cookbook.openai.com/examples/how_to_use_guardrails</p>
<p>[3] OpenAI. (2025). Receipt inspection: Eval-driven system design. OpenAI Cookbook. https://cookbook.openai.com/examples/partners/eval_driven_system_design/receipt_inspection</p>
<p>[4] OpenAI. (2024). Developing hallucination guardrails. OpenAI Cookbook. https://cookbook.openai.com/examples/developing_hallucination_guardrails</p>
<p>[5] OpenAI. (2025). GPT OSS safeguard guide. OpenAI Cookbook. https://cookbook.openai.com/articles/gpt-oss-safeguard-guide</p>
<p>[6] Anthropic. (n.d.). Mitigate jailbreaks. Claude Documentation. https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/mitigate-jailbreaks</p>
<p>[7] Anthropic. (n.d.). Increase consistency. Claude Documentation. https://platform.claude.com/docs/en/test-and-evaluate/strengthen-guardrails/increase-consistency</p>
<p>[8] Anthropic. (n.d.). Prompt caching. Claude Documentation. https://platform.claude.com/docs/en/build-with-claude/prompt-caching</p>
<p>[9] Anthropic. (n.d.). Token counting. Claude Documentation. https://platform.claude.com/docs/en/build-with-claude/token-counting</p>
]]></content:encoded></item><item><title><![CDATA[How I Would Explain Recursion To A Musician]]></title><description><![CDATA[After nearly a decade of building software systems for startups and contributing to projects used by enterprise companies, I’ve learned that the best explanations don’t come from textbooks. They come from connecting what someone already knows to what...]]></description><link>https://www.thatsametechguy.com/how-i-would-explain-recursion-to-a-musician</link><guid isPermaLink="true">https://www.thatsametechguy.com/how-i-would-explain-recursion-to-a-musician</guid><category><![CDATA[music]]></category><category><![CDATA[Recursion]]></category><category><![CDATA[software development]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[Orchestration]]></category><dc:creator><![CDATA[Enoch Olutunmida]]></dc:creator><pubDate>Fri, 27 Jun 2025 12:45:50 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1751027984024/470e12c7-27de-4a4b-83be-bd5b391083c8.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<p>After nearly a decade of building software systems for startups and contributing to projects used by enterprise companies, I’ve learned that the best explanations don’t come from textbooks. They come from connecting what someone already knows to what they need to learn.</p>
<p>Recently, I revisited a conversation I had with a talented musician who was diving into programming. When we reached the topic of recursion, I watched his eyes glaze over at the typical "a function that calls itself" explanation. That's when I realised: musicians already understand recursion. They just call it something else.</p>
<h2 id="heading-the-musical-foundation">The Musical Foundation</h2>
<p>Every musician knows the power of repetition with variation. Think about Pachelbel's Canon, where the same harmonic progression repeats endlessly, each time with new melodic layers. Or consider how a jazz musician takes a theme, plays it, then plays variations that reference the original theme, building complexity through controlled repetition.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751020733332/ffb01d44-1d24-47ec-a230-2d1935a7f920.png" alt class="image--center mx-auto" /></p>
<p><strong><em>Johann Pachelbel, Canon in D</em></strong> <em>(score image from</em> <a target="_blank" href="http://PianoTV.net"><em>PianoTV.net</em></a><em>, Canon in D Piano Tutorial). Accessed June 27, 2025.</em></p>
<p>This is recursion in its purest form: a pattern that references itself while moving toward a resolution.</p>
<h2 id="heading-the-conductors-recursion">The Conductor's Recursion</h2>
<p>Imagine you're performing a piece where the composer has written: "Repeat this passage, but each time, play it softer than before. Stop when you can barely hear the notes."</p>
<p><img src="https://media1.giphy.com/media/v1.Y2lkPTc5MGI3NjExcDgwdW56OGl5ZmV1Ym5pcnR2Nmt0NTY4N2h2ZHl4dWFsanBvOHBjOCZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/d3mnIV9Tf6MI7XDq/giphy.gif" alt="via GIPHY" /></p>
<p>Here's what's happening:</p>
<ul>
<li><p><strong>The recursive call</strong>: The instruction to repeat the passage</p>
</li>
<li><p><strong>The changing state</strong>: Each repetition is softer than the last</p>
</li>
<li><p><strong>The base case</strong>: Stop when the volume reaches near silence</p>
</li>
<li><p><strong>The resolution</strong>: The music naturally fades to its conclusion</p>
</li>
</ul>
<p>This is exactly how recursion works in code. Let me show you with a simple example:</p>
<pre><code class="lang-javascript"><span class="hljs-function"><span class="hljs-keyword">function</span> <span class="hljs-title">playPassage</span>(<span class="hljs-params">volume</span>) </span>{
    <span class="hljs-keyword">if</span> (volume &lt;= <span class="hljs-number">0</span>) {
        <span class="hljs-comment">// Base case: too quiet to continue</span>
        <span class="hljs-keyword">return</span> <span class="hljs-string">"Silence"</span>
    }

    <span class="hljs-comment">// Play the current passage (performMusic stands in for actual playback)</span>
    performMusic(volume)

    <span class="hljs-comment">// Recursive call: play again, but softer</span>
    <span class="hljs-keyword">return</span> playPassage(volume - <span class="hljs-number">10</span>)
}
</code></pre>
<h2 id="heading-the-fractal-nature-of-musical-structure">The Fractal Nature of Musical Structure</h2>
<p>Here's where it gets beautiful. Music itself is recursive at multiple levels. A symphony has movements, movements have sections, sections have phrases, phrases have motifs. Each level contains and references the patterns of the levels around it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1751026660487/2af85d68-121c-4200-8f2a-246174f44c32.webp" alt class="image--center mx-auto" /></p>
<p><strong><em>"Classical Forms"</em></strong> <em>Tina Christie Flute, Tina Christie,</em> <a target="_blank" href="https://tinachristieflute.com/music-eras/classical-era/classical-forms/"><em>https://tinachristieflute.com</em></a><em>. Accessed June 27, 2025.</em></p>
<p>Consider how a fugue works. Bach takes a simple subject (theme) and weaves it through different voices, each entry calling back to the original while creating something entirely new. The subject appears, then appears again in a different voice, then again transformed, until the entire piece resolves.</p>
<p>This is recursive thinking: solving a complex musical problem by breaking it into smaller, similar problems.</p>
<h2 id="heading-the-stack-overflow-of-music">The Stack Overflow of Music</h2>
<p>Every musician has experienced the musical equivalent of a stack overflow error. It happens when you get caught in a practice loop, playing the same difficult passage over and over without progress, until your muscle memory breaks down and you can't play anything correctly.</p>
<p>In programming, this happens when we forget the base case. The function keeps calling itself forever until the system runs out of memory. In music, it happens when we practice without intention, repeating without moving toward resolution.</p>
<p>The solution in both cases is the same: define clear stopping conditions and ensure each iteration moves us closer to our goal.</p>
<h2 id="heading-practical-recursion-the-musical-tree">Practical Recursion: The Musical Tree</h2>
<p>Think about how a melody develops. A composer starts with a simple phrase, then creates variations: inversions, augmentations, diminutions. Each variation references the original but adds something new.</p>
<p>This is exactly how recursive algorithms work with data structures like trees. Each node contains data and references to smaller versions of the same structure. When we traverse a musical tree of variations, we're using recursive thinking: "Process this variation, then process all its sub-variations the same way."</p>
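<p>To make that concrete, here is a small "variation tree" walked recursively. Each node is processed, then its sub-variations are processed the same way; a node with no children simply recurses no further, which is the base case:</p>

```javascript
// Walk a tree of melodic variations depth-first, collecting an indented outline.
function collectVariations(node, depth = 0, out = []) {
  out.push('  '.repeat(depth) + node.name); // process this variation
  for (const child of node.children || []) {
    collectVariations(child, depth + 1, out); // recursive call per sub-variation
  }
  return out;
}

const theme = {
  name: 'Theme',
  children: [
    { name: 'Inversion', children: [{ name: 'Inverted and augmented' }] },
    { name: 'Diminution' },
  ],
};
```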
<h2 id="heading-why-this-matters-beyond-code">Why This Matters Beyond Code</h2>
<p>Understanding recursion through music isn't just about making programming concepts accessible. It's about recognising that the patterns we use to create software mirror the patterns humans have used to create art for centuries.</p>
<p>When I'm architecting systems for large enterprises, I often think like a composer. How can I break this complex problem into smaller, self-similar pieces? How can I create patterns that reference themselves while building toward a resolution? How can I ensure my "performance" doesn't get stuck in an infinite loop?</p>
<h2 id="heading-the-resolution">The Resolution</h2>
<p>Recursion isn't just a programming technique. It's a way of thinking about problems that musicians, artists, and creators have used intuitively for generations. By connecting it to musical concepts, we're not simplifying recursion; we're revealing its deeper elegance.</p>
<p>The next time you encounter a recursive problem, think like a musician. What's your theme? How will it develop? Where will it resolve? And most importantly, how will you know when to stop?</p>
<p>Because in both music and code, the most beautiful solutions are often the ones that know exactly when to end.</p>
]]></content:encoded></item><item><title><![CDATA[Microservices vs. Monoliths: Making the Right Choice for Your Engineering Needs.]]></title><description><![CDATA[In the ever-evolving field of software engineering, you may have found yourself at a crossroads where you must decide: microservices or monoliths? Having been in this position a few times myself, I've gathered valuable insights along the way. Whether...]]></description><link>https://www.thatsametechguy.com/microservices-vs-monoliths-making-the-right-choice-for-your-engineering-needs</link><guid isPermaLink="true">https://www.thatsametechguy.com/microservices-vs-monoliths-making-the-right-choice-for-your-engineering-needs</guid><category><![CDATA[Microservices]]></category><category><![CDATA[Software Engineering]]></category><category><![CDATA[monolithic architecture]]></category><category><![CDATA[scaling]]></category><category><![CDATA[Modular Monolith]]></category><category><![CDATA[System Design]]></category><dc:creator><![CDATA[Enoch Olutunmida]]></dc:creator><pubDate>Wed, 30 Oct 2024 16:49:38 GMT</pubDate><content:encoded><![CDATA[<p>In the ever-evolving field of software engineering, you may have found yourself at a crossroads where you must decide: microservices or monoliths? Having been in this position a few times myself, I've gathered valuable insights along the way. Whether you’re just starting out or looking to improve your current processes, let’s dive into this topic and explore when microservices could be the perfect addition to your engineering needs.</p>
<h2 id="heading-understanding-monoliths">Understanding Monoliths</h2>
<p>Monoliths can be a great choice, especially in the early stages of your project. They offer simplicity in development and deployment. However, as your application grows, monolithic architectures can create several bottlenecks:</p>
<ul>
<li><p><strong>Tight Coupling</strong>: Changes in one part of the application can affect others, complicating development.</p>
</li>
<li><p><strong>Long Deployment Cycles</strong>: Coordinating updates across a large codebase can slow down deployment times.</p>
</li>
<li><p><strong>Scaling Challenges</strong>: Scaling a monolith usually means scaling the entire application, which can waste resources.</p>
</li>
</ul>
<p>If your team is experiencing frustrations with these limitations, it may be time to reconsider your architecture.</p>
<h3 id="heading-understanding-microservices">Understanding Microservices</h3>
<p>Microservices enable you to break down a large application into smaller, manageable services that can be developed, deployed, and scaled independently. Each service focuses on a single responsibility, allowing teams to build and deploy features faster. However, adopting microservices isn’t always the best choice. Here’s when you should consider them.</p>
<h3 id="heading-when-to-adopt-microservices">When to Adopt Microservices</h3>
<p><strong>Complexity and Scale:</strong> If you anticipate significant growth in your application or the need for diverse functionalities, microservices can help manage that complexity. They allow you to scale services independently, which can be more efficient than scaling an entire monolith.</p>
<p><strong>Team Structure:</strong> Larger teams can really benefit from adopting microservices. When multiple teams work on different services at the same time, it speeds up the development process significantly. Just ensure your teams have experience with distributed systems to avoid common pitfalls, such as communication overhead and misaligned objectives.</p>
<p><strong>Rapid Deployment Needs:</strong> If you need to roll out frequent updates and features, microservices enable independent deployments. This approach reduces the risks associated with large, coordinated deployments that are typical of monolithic architectures.</p>
<p><strong>Technology Diversity:</strong> If your project requires different technologies for various parts of the application, microservices offer the flexibility to choose the best tool for each service without being constrained by a uniform technology stack.</p>
<p><strong>Autonomy and Isolation:</strong> With microservices, each service can evolve at its own pace. This autonomy promotes faster iterations and reduces the risk of changes in one service impacting others.</p>
<h3 id="heading-when-microservices-become-more-of-a-liability-than-an-asset">When Microservices Become More of a Liability than an Asset</h3>
<p><strong>Simple Applications:</strong> For small, straightforward applications, a monolithic architecture may be more efficient, as the overhead of managing multiple services might outweigh the benefits.</p>
<p><strong>Limited Resources:</strong> Transitioning to microservices requires investment in both infrastructure and human capital. The costs associated with managing multiple services — including deployment and monitoring tools — can add up quickly. Additionally, it’s crucial to have a team experienced in managing distributed systems, container orchestration, and DevOps practices to ensure a smooth transition and effective operation.</p>
<p><strong>Tight Deadlines:</strong> If you’re under time constraints, implementing microservices can introduce unnecessary complexity. A monolithic approach allows for quicker development and deployment in such scenarios.</p>
<p><strong>Low Scalability Requirements:</strong> If your application is unlikely to experience varying loads across different components, a monolith may suffice. Scaling a monolith is simpler in such cases, as you won't need to worry about inter-service communication and coordination.</p>
<p><strong>Maintenance and Technical Debt:</strong> Microservices can introduce maintenance challenges. If your team isn’t ready to handle the added complexity, you might find yourself accumulating more technical debt, which can hinder development in the long run.</p>
<h2 id="heading-finding-a-middle-ground">Finding a Middle Ground</h2>
<p>For teams caught between the benefits of monoliths and the scalability that microservices offer, a <strong>modular monolith</strong> can serve as an effective middle ground. This approach allows you to develop your application within a single codebase while still maintaining separation of concerns through well-defined modules. By doing so, you can tackle some of the drawbacks of a traditional monolith — such as tight coupling and long deployment cycles — while avoiding the complexities and overhead costs typically associated with microservices.</p>
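<p>To make the idea concrete, here is a minimal sketch of what well-defined module boundaries can look like inside a single codebase. The <code>BillingService</code> interface and module names are hypothetical examples, not a prescribed layout: each module exposes a narrow public surface, and other modules depend only on that surface, never on internals.</p>
<pre><code class="lang-typescript">// modules/billing/index.ts -- the billing module's public surface.
// Other modules import only these types, never the implementation.
interface Invoice {
  customerId: string;
  amountCents: number;
  status: "draft" | "issued";
}

interface BillingService {
  createInvoice(customerId: string, amountCents: number): Invoice;
}

// modules/billing/internal.ts -- implementation detail, swappable
// later for a standalone service without touching callers.
class InMemoryBillingService implements BillingService {
  createInvoice(customerId: string, amountCents: number): Invoice {
    return { customerId, amountCents, status: "draft" };
  }
}

function makeBillingService(): BillingService {
  return new InMemoryBillingService();
}

// modules/orders/checkout.ts -- a second module depends only on the
// BillingService interface, keeping the coupling explicit and narrow.
function checkout(billing: BillingService, customerId: string): Invoice {
  return billing.createInvoice(customerId, 4999);
}
</code></pre>
<p>Because the boundary is an interface rather than a network call, you keep monolith-level simplicity today while leaving a clean seam for extracting a real service later.</p>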
<h2 id="heading-starting-fresh-monolith-or-microservices">Starting Fresh: Monolith or Microservices?</h2>
<p>When embarking on a new project, it’s essential to weigh your options carefully. Starting with a monolithic architecture can provide simplicity and speed during initial development, allowing you to validate your ideas and build your product—especially if you’re a startup aiming to get to market quickly. Once your application matures and you have a better understanding of your scaling needs, you can gradually refactor into microservices if necessary.</p>
<p>On the other hand, if you are developing a large-scale application with significant anticipated growth and have the financial resources, skilled personnel, and infrastructure to support a microservices architecture, you might choose to implement it from the outset. However, be prepared for the complexities involved, such as managing inter-service communication, ensuring data consistency, and implementing effective monitoring and logging solutions.</p>
<h2 id="heading-final-thoughts">Final Thoughts</h2>
<p>Ultimately, the decision between microservices and monoliths is a balancing act. Microservices offer flexibility and scalability but come with increased complexity. Evaluating your project’s goals, your team’s capabilities, and the long-term vision is crucial for making the right choice.</p>
<p>About two years ago, I joined a project halfway through as a contractor. The monolithic setup we had became more of a liability than an asset, as it created specific bottlenecks, such as difficulties in scaling individual features. The project started to feel like an anchor, weighing the team down and making it challenging to implement changes without worrying about unintended side effects. Additionally, onboarding new developers became increasingly difficult due to the steep learning curve associated with a large, tightly coupled codebase.</p>
<p>In evaluating our options, we performed a cost-benefit analysis and decided that microservices were the way to go, leading to significant improvements in the team's efficiency. We approached the transition with a phased strategy, extracting individual services incrementally to minimise disruption. However, the journey was not without challenges, such as ensuring effective communication between services and managing the new complexities introduced by a distributed system. Thus, it’s essential to weigh the trade-offs carefully when considering such a significant architectural shift.</p>
<p>Before making a decision, take the time to assess your current situation. A thoughtful approach will not only address your immediate needs but also set you up for future success. Remember, the best architecture is the one that aligns with your project’s unique requirements.</p>
]]></content:encoded></item><item><title><![CDATA[Deleting the Middle Node of a Linked List Using the Tortoise and Hare Algorithm in TypeScript]]></title><description><![CDATA[Introduction
In this post, we’ll be solving LeetCode Problem 2095: “Delete the Middle Node of a Linked List.”
The problem statement on LeetCode is:

You are given the head of a linked list. Delete the middle node and return the head of the modified l...]]></description><link>https://www.thatsametechguy.com/deleting-the-middle-node-of-a-linked-list-using-the-tortoise-and-hare-algorithm-in-typescript</link><guid isPermaLink="true">https://www.thatsametechguy.com/deleting-the-middle-node-of-a-linked-list-using-the-tortoise-and-hare-algorithm-in-typescript</guid><category><![CDATA[leetcode]]></category><category><![CDATA[leetcode-solution]]></category><category><![CDATA[coding challenge]]></category><category><![CDATA[coding]]></category><category><![CDATA[interview questions]]></category><category><![CDATA[coding interview]]></category><category><![CDATA[linked list]]></category><category><![CDATA[two pointers]]></category><category><![CDATA[DSA]]></category><category><![CDATA[algorithms]]></category><dc:creator><![CDATA[Enoch Olutunmida]]></dc:creator><pubDate>Sat, 26 Oct 2024 02:32:05 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729909368543/caf0a912-5d8e-4cac-b177-2f7cde637b57.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h3 id="heading-introduction">Introduction</h3>
<p>In this post, we’ll be solving LeetCode Problem 2095: “Delete the Middle Node of a Linked List.”</p>
<p>The problem statement on LeetCode is:</p>
<blockquote>
<p>You are given the head of a linked list. Delete the middle node and return the head of the modified linked list.</p>
</blockquote>
<p>In essence, our goal is to remove the middle node from a singly linked list using an efficient approach that finds the middle without traversing the list multiple times.</p>
<p>To solve it, we’ll employ the Tortoise and Hare (Two-Pointer) algorithm, a technique commonly used for linked list problems involving a slow and a fast pointer. This blog covers the code, essential edge cases, and explains how the two-pointer logic works visually.</p>
<p>If you would prefer a video walkthrough, <a target="_blank" href="https://youtu.be/v9OVEjKngL8?si=Nfds5EqO6nQbRH1k"><strong>check out my YouTube video here</strong></a>, where I explain the solution step-by-step!</p>
<h3 id="heading-problem-breakdown">Problem Breakdown</h3>
<p>The task requires us to:</p>
<ul>
<li><p>Traverse the linked list to locate the middle node.</p>
</li>
<li><p>Remove this middle node without needing to re-traverse the entire list.</p>
</li>
</ul>
<p><strong>Constraints:</strong> The linked list will have at least one node.</p>
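<p>For reference, LeetCode supplies a <code>ListNode</code> stub for TypeScript submissions; it looks roughly like this (reproduced here so the later snippets are easier to follow):</p>
<pre><code class="lang-typescript">// Definition for a singly-linked list node.
class ListNode {
    val: number;
    next: ListNode | null;
    constructor(val?: number, next?: ListNode | null) {
        this.val = val === undefined ? 0 : val;
        this.next = next === undefined ? null : next;
    }
}
</code></pre>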
<p>To tackle this efficiently, we’ll rely on two pointers:</p>
<ul>
<li><p><strong>Fast Pointer (Hare):</strong> Moves two steps at a time.</p>
</li>
<li><p><strong>Slow Pointer (Tortoise):</strong> Moves one step at a time.</p>
</li>
</ul>
<p>When the fast pointer reaches the end of the list, the slow pointer will be at the middle node.</p>
<h3 id="heading-step-by-step-solution-using-the-tortoise-and-hare-algorithm">Step-by-Step Solution Using the Tortoise and Hare Algorithm</h3>
<h4 id="heading-step-1-initialise-pointers">Step 1: Initialise Pointers</h4>
<p>We initialise:</p>
<ol>
<li><p><code>slowPointer</code> and <code>fastPointer</code> both pointing to the head of the linked list.</p>
</li>
<li><p><code>prevPointer</code> as <code>null</code>, to keep track of the node before the slow pointer.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729907050224/bc12c90c-6595-4ed7-95db-025fc7431ccb.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-2-traverse-the-list">Step 2: Traverse the List</h4>
<p>With each iteration:</p>
<ul>
<li><p>Update <code>prevPointer</code> to point to the current <code>slowPointer</code> before it moves forward.</p>
</li>
<li><p>Move <code>slowPointer</code> one step forward.</p>
</li>
<li><p>Move <code>fastPointer</code> two steps forward.</p>
</li>
</ul>
<p>This setup ensures that by the time <code>fastPointer</code> reaches the end of the list, <code>slowPointer</code> will be at the middle node, and <code>prevPointer</code> will be one step behind it.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729907074656/62306be3-e87d-4d21-8464-c15c84bbddf7.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-3-remove-the-middle-node">Step 3: Remove the Middle Node</h4>
<p>Once the traversal is complete:</p>
<ol>
<li><p><strong>Confirm that</strong> <code>slowPointer</code> is at the middle node.</p>
</li>
<li><p><strong>Adjust Pointers:</strong> Set <code>prevPointer.next = slowPointer.next</code> to "skip" the middle node (the one <code>slowPointer</code> is currently pointing to), effectively deleting it from the list.</p>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729907849603/1f8f62c9-c44d-4777-8c2e-b21d8055ac4d.png" alt class="image--center mx-auto" /></p>
<h4 id="heading-step-4-edge-case-single-node-list">Step 4: Edge Case - Single Node List</h4>
<p>If the list has only one node, <code>fastPointer</code> won’t be able to move two steps. In this case:</p>
<ul>
<li>Simply return <code>null</code>, since removing the middle node from a single-node list leaves the list empty.</li>
</ul>
<h3 id="heading-code-implementation-in-typescript">Code Implementation in TypeScript</h3>
<p>Here’s how the solution looks in TypeScript:</p>
<pre><code class="lang-typescript">function deleteMiddle(head: ListNode | null): ListNode | null {
    // Guard: an empty or single-node list becomes empty.
    if (!head || !head.next) {
        return null;
    }

    let slowPointer: ListNode = head;
    let fastPointer: ListNode | null = head;
    let prevPointer: ListNode | null = null;

    // When fastPointer reaches the end, slowPointer sits on the
    // middle node and prevPointer on the node just before it.
    while (fastPointer &amp;&amp; fastPointer.next) {
        prevPointer = slowPointer;
        slowPointer = slowPointer.next!; // safe: slowPointer trails fastPointer
        fastPointer = fastPointer.next.next;
    }

    // Unlink the middle node.
    if (prevPointer) {
        prevPointer.next = slowPointer.next;
    }

    return head;
}
</code></pre>
<h3 id="heading-explanation-of-the-code">Explanation of the Code</h3>
<ul>
<li><p><strong>Edge Case Handling:</strong> The first check ensures that if the list has only one node (or none), it returns <code>null</code>.</p>
</li>
<li><p><strong>Traversing the List:</strong> The <code>while</code> loop moves <code>fastPointer</code> by two steps and <code>slowPointer</code> by one step. At the same time, <code>prevPointer</code> updates to stay one step behind <code>slowPointer</code>.</p>
</li>
<li><p><strong>Node Deletion:</strong> After the loop, <code>prevPointer.next</code> is set to <code>slowPointer.next</code>, effectively removing the middle node from the list.</p>
</li>
</ul>
<h3 id="heading-complexity-analysis">Complexity Analysis</h3>
<ul>
<li><p><strong>Time Complexity:</strong> O(n) because we’re traversing the list with the two pointers just once.</p>
</li>
<li><p><strong>Space Complexity:</strong> O(1) as no additional space is used beyond the three pointers.</p>
</li>
</ul>
<h3 id="heading-edge-cases-and-additional-considerations">Edge Cases and Additional Considerations</h3>
<ol>
<li><p><strong>Single Node List:</strong> As mentioned, returning <code>null</code> when there’s only one node in the list.</p>
</li>
<li><p><strong>Two Node List:</strong> After removing the middle node, the list should contain just one node.</p>
</li>
<li><p><strong>Odd vs. Even Length Lists:</strong> For an odd-length list, the exact middle node is removed. For an even-length list, the second of the two central nodes is considered the middle.</p>
</li>
</ol>
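<p>These cases can be exercised with a small, self-contained harness. The <code>fromArray</code> and <code>toArray</code> helpers are names introduced here purely for illustration, and the node class and function are restated compactly so the snippet runs on its own:</p>
<pre><code class="lang-typescript">class ListNode {
  constructor(public val: number = 0, public next: ListNode | null = null) {}
}

function deleteMiddle(head: ListNode | null): ListNode | null {
  if (!head || !head.next) return null;
  let slowPointer: ListNode = head;
  let fastPointer: ListNode | null = head;
  let prevPointer: ListNode | null = null;
  while (fastPointer &amp;&amp; fastPointer.next) {
    prevPointer = slowPointer;
    slowPointer = slowPointer.next!;
    fastPointer = fastPointer.next.next;
  }
  if (prevPointer) prevPointer.next = slowPointer.next;
  return head;
}

// Build a linked list from an array (back to front), and flatten
// one back into an array for easy comparison.
function fromArray(values: number[]): ListNode | null {
  let head: ListNode | null = null;
  for (let i = values.length - 1; i >= 0; i--) {
    head = new ListNode(values[i], head);
  }
  return head;
}

function toArray(head: ListNode | null): number[] {
  const out: number[] = [];
  for (let node = head; node; node = node.next) out.push(node.val);
  return out;
}

console.log(toArray(deleteMiddle(fromArray([1, 3, 4, 7, 1, 2, 6])))); // odd length: 7 removed
console.log(toArray(deleteMiddle(fromArray([1, 2]))));                // even length: 2 removed
console.log(deleteMiddle(fromArray([5])));                            // single node: null
</code></pre>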
<h3 id="heading-conclusion">Conclusion</h3>
<p>The Tortoise and Hare algorithm provides an efficient solution to this problem by requiring only a single traversal to locate and delete the middle node. This approach is also valuable for other linked list problems, such as detecting cycles or identifying the start of a loop.</p>
<p>Mastering this algorithm will deepen your understanding of linked list problem-solving, a skill that often proves valuable in coding interviews.</p>
]]></content:encoded></item></channel></rss>