<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0"><channel><title><![CDATA[Amador Criado's blog]]></title><description><![CDATA[Documenting my journey as a Cloud Architect: real-world projects, design principles, and cloud-native solutions.]]></description><link>https://amatore.dev</link><generator>RSS for Node</generator><lastBuildDate>Wed, 22 Apr 2026 05:29:37 GMT</lastBuildDate><atom:link href="https://amatore.dev/rss.xml" rel="self" type="application/rss+xml"/><language><![CDATA[en]]></language><ttl>60</ttl><item><title><![CDATA[RAG Chatbot with Amazon Bedrock & LangChain]]></title><description><![CDATA[Introduction
Large Language Models (LLMs) are revolutionizing how we interact with information, but they face challenges with accuracy and access to up-to-date data. Retrieval Augmented Generation (RAG) addresses these limitations by grounding the LL...]]></description><link>https://amatore.dev/rag-chatbot-with-amazon-bedrock-langchain</link><guid isPermaLink="true">https://amatore.dev/rag-chatbot-with-amazon-bedrock-langchain</guid><category><![CDATA[AWS Bedrock]]></category><category><![CDATA[AWS]]></category><category><![CDATA[llm]]></category><category><![CDATA[RAG ]]></category><category><![CDATA[streamlit]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Mon, 04 Nov 2024 15:24:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1730733367935/f43081e6-f604-4792-8d22-8c4c64af0ee3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Large Language Models (LLMs) are revolutionizing how we interact with information, but they face challenges with accuracy and access to up-to-date data. <strong>Retrieval Augmented Generation (RAG)</strong> addresses these limitations by grounding the LLM's responses in a dedicated knowledge base.</p>
<p>This article explores a project based on the implementation of a RAG chatbot using <strong>Amazon Bedrock</strong> and <strong>LangChain</strong> that enhances the chatbot's ability to provide more contextually relevant and current information, making it a key approach for a wide range of applications based on generative AI. The full codebase of the project is available in my <a target="_blank" href="https://github.com/acriado-dev/amazon-bedrock-rag-chatbot">GitHub repository</a>.</p>
<h2 id="heading-design-considerations">Design considerations</h2>
<p>As can be seen in other articles on my blog, I usually review the design considerations of any project, workload, or cloud architecture showcase in order to give a wider view of its main purpose. For this case, the following design principles guided the development:</p>
<ul>
<li><p><strong>Serverless LLM Architecture:</strong> Leveraging Amazon Bedrock's serverless model access eliminates the need to manage infrastructure, simplifying deployment, scaling, and experimentation. This allows developers to focus on building the core logic of the chatbot without worrying about server management.</p>
</li>
<li><p><strong>Modularity:</strong> The codebase is designed with modularity in mind, facilitating easy extension, component swapping, and simplified maintenance as the project evolves. This approach promotes code reusability and makes it easier to integrate new features or adapt to changing requirements.</p>
</li>
<li><p><strong>Streamlit for Rapid Prototyping:</strong> Streamlit's intuitive framework allows for rapid prototyping and development of user interfaces.</p>
</li>
<li><p><strong>Dockerized Deployment:</strong> The project is containerized using Docker, ensuring consistent execution across different environments and simplifying deployment to various platforms.</p>
</li>
<li><p><strong>Extensibility:</strong> The design allows for easy integration with different data sources and future enhancements like personalized responses or multi-modal search.</p>
</li>
<li><p><strong>Cost-Effectiveness:</strong> While not fully optimized for production, cost-effective practices are considered, such as choosing appropriate model sizes and efficient data handling. This helps keep experimentation costs manageable and lays the groundwork for future optimization in a production environment.</p>
</li>
</ul>
<h2 id="heading-architecture">Architecture</h2>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1730724209427/77a131fc-39fe-4229-a8ea-e8ec77047ea3.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-key-components">Key Components</h3>
<ul>
<li><p><strong>Knowledge Base:</strong> Designed to be flexible, can ingest data from various sources, including CSV files, potentially expanding to databases and other formats in future iterations.</p>
</li>
<li><p><strong>Vector Database (ChromaDB):</strong> Stores vector embeddings of the knowledge base generated using a suitable embedding model like <code>amazon.titan-embed-text-v2:0</code>. This allows for efficient similarity searches when retrieving relevant information.</p>
</li>
<li><p><strong>Streamlit:</strong> Provides the user interface for interacting with the chatbot. Streamlit's intuitive API and interactive components make it easy to build a user-friendly interface for querying the knowledge base and visualizing responses.</p>
</li>
<li><p><strong>Amazon Bedrock:</strong> Provides access to the foundation models powering the chatbot. Specifically, <code>amazon.titan-embed-text-v2:0</code> is used for creating text embeddings, while <code>anthropic.claude-3-sonnet-20240229-v1:0</code> is employed for text generation and driving the conversational aspect of the application. Bedrock's serverless infrastructure simplifies deployment and experimentation with these powerful models.</p>
</li>
<li><p><strong>LangChain:</strong> Orchestrates the interaction between the vector database, LLM, and user interface. It streamlines the development process and manages the retrieval and generation workflow; a minimal sketch of this flow follows the list.</p>
</li>
</ul>
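<p>To make the orchestration concrete, here is a minimal sketch of the retrieval-and-generation flow in Python. It assumes the <code>langchain-aws</code> and <code>langchain-chroma</code> packages and AWS credentials with Bedrock model access; the persist directory and prompt wording are illustrative, not the project's exact code:</p>
<pre><code class="lang-python">from langchain_aws import BedrockEmbeddings, ChatBedrock
from langchain_chroma import Chroma

# The same Bedrock models used in this project.
embeddings = BedrockEmbeddings(model_id="amazon.titan-embed-text-v2:0")
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

# ChromaDB acts as the vector store; the directory is illustrative.
vectordb = Chroma(persist_directory="./chroma_db", embedding_function=embeddings)
retriever = vectordb.as_retriever(search_kwargs={"k": 3})

def answer(question: str) -> str:
    # Retrieve the most similar chunks and ground the model's answer in them.
    docs = retriever.invoke(question)
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.invoke(prompt).content
</code></pre>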
<h3 id="heading-workflow">Workflow</h3>
<ol>
<li><p>The user enters a new question in the Streamlit Chat App.</p>
</li>
<li><p>The chat history is retrieved from the memory object and prepended to the newly entered message.</p>
</li>
<li><p>The question is converted to a vector using Amazon Titan Embeddings, then matched against the closest vectors in the database to retrieve context.</p>
</li>
<li><p>The new question, the chat history, and the context from the knowledge base are combined and sent to the model.</p>
</li>
<li><p>The model's response is displayed to the user in the Streamlit app.</p>
</li>
</ol>
<h3 id="heading-data-preprocessing">Data Preprocessing</h3>
<p>A critical preprocessing step prepares the data for efficient retrieval and influences the quality of the chatbot's responses. This involves structuring the data and generating embeddings for each piece of information.</p>
<p>The data is organized in a JSON file containing an array of objects. Each object represents a chunk of information and has the following structure, illustrated by this example of <code>Amazon Bedrock Faqs</code> that has been ingested:</p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"id"</span>: <span class="hljs-number">1</span>,
  <span class="hljs-attr">"document"</span>: <span class="hljs-string">"What is Amazon Bedrock?\\nAmazon Bedrock is a fully managed service..."</span>,
  <span class="hljs-attr">"metadata"</span>: {
    <span class="hljs-attr">"topic"</span>: <span class="hljs-string">"bedrock"</span>
  },
  <span class="hljs-attr">"embedding"</span>: [<span class="hljs-number">-0.1018051</span>, <span class="hljs-number">0.01927839</span>, <span class="hljs-number">0.004059858</span>, ...]
}
</code></pre>
<ul>
<li><p><code>"id"</code>: A unique integer ID for each text chunk.</p>
</li>
<li><p><code>"document"</code>: The raw text content of the chunk, which includes questions like "What is Amazon Bedrock?" and their corresponding answers. Notice the <code>\\n</code> indicating newline characters within the text.</p>
</li>
<li><p><code>"metadata"</code>: Currently, metadata includes a "topic" field set to "bedrock", suggesting this data relates to information about Amazon Bedrock. This could be expanded to include other relevant metadata like source or date.</p>
</li>
<li><p><code>"embedding"</code>: A 768-dimensional vector representing the semantic meaning of the "document". These embeddings are pre-calculated using the <code>amazon.titan-embed-text-v2:0</code> model, which supports a flexible embedding dimension size, and are crucial for efficient similarity search within the vector database. Storing them directly in the JSON avoids recalculation during query time, significantly improving performance.</p>
</li>
</ul>
<p>This embedding strategy, coupled with the structured JSON format, optimizes retrieval efficiency and allows the chatbot to generate more relevant and contextually appropriate responses.</p>
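<p>For reference, the embeddings can be pre-calculated with a short script. Below is a minimal sketch using boto3, assuming access to the Titan embedding model has been granted in Bedrock; the file names and region are illustrative, not the project's exact code:</p>
<pre><code class="lang-python">import json

import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> list:
    # Titan Text Embeddings V2 takes an inputText field and returns
    # an "embedding" vector in the response body.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

# Enrich each chunk with its pre-calculated embedding (file names illustrative).
with open("bedrock_faqs.json") as f:
    chunks = json.load(f)

for chunk in chunks:
    chunk["embedding"] = embed(chunk["document"])

with open("bedrock_faqs_embedded.json", "w") as f:
    json.dump(chunks, f)
</code></pre>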
<h2 id="heading-conclusion">Conclusion</h2>
<p>This RAG chatbot prototype provides a solid starting point for developers looking to explore and experiment with retrieval augmented generation. By combining Amazon Bedrock, ChromaDB, and LangChain, we can build intelligent conversational AI systems that are more grounded and informative. The project demonstrates the potential of RAG for building a new generation of conversational applications.</p>
<h3 id="heading-next-steps">Next Steps</h3>
<p>The mentioned GitHub repository <a target="_blank" href="https://github.com/acriado-dev/amazon-bedrock-rag-chatbot">amazon-bedrock-rag-chatbot</a> contains the complete code and instructions for running the project in a local environment. Looking forward, future development could focus on:</p>
<ul>
<li><p><strong>Expanding data ingestion capabilities:</strong> Supporting more diverse data sources and formats.</p>
</li>
<li><p><strong>Improving retrieval accuracy:</strong> Experimenting with different retrieval strategies and ranking algorithms.</p>
</li>
<li><p><strong>Exploring advanced features:</strong> Adding personalization, multi-modal search, and more sophisticated user interfaces.</p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Ingesting F1 Telemetry UDP real-time data in AWS EKS]]></title><description><![CDATA[Introduction
In this post I’ll continue with my series of Real Time data integration in Cloud. This time I’ll explain How I built a real time telemetry data ingestion pipeline using AWS EKS to process UDP data from the F1 2023 Playstation 4 game. The...]]></description><link>https://amatore.dev/ingesting-f1-telemetry-udp-real-time-data-in-aws-eks</link><guid isPermaLink="true">https://amatore.dev/ingesting-f1-telemetry-udp-real-time-data-in-aws-eks</guid><category><![CDATA[EKS]]></category><category><![CDATA[AWS]]></category><category><![CDATA[telemetry]]></category><category><![CDATA[CDK]]></category><category><![CDATA[Kubernetes]]></category><category><![CDATA[formula1]]></category><category><![CDATA[UDP]]></category><category><![CDATA[Cloud Computing]]></category><category><![CDATA[Cloud]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Wed, 16 Oct 2024 11:44:28 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1729079250945/b9dc6f53-6a3a-4a43-b030-563e37bffba2.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>In this post I’ll continue my series on real-time data integration in the cloud. This time I’ll explain how I built a real-time telemetry data ingestion pipeline using AWS EKS to process UDP data from the F1 2023 PlayStation 4 game. The solution efficiently handles the ingestion, processing, and visualization of live racing telemetry, all while ensuring scalability, reliability, and cost-effectiveness.</p>
<p>This use case was chosen because F1 car telemetry is an excellent example of high-frequency data, which, combined with the complexity of UDP as a protocol, makes it an ideal scenario to showcase the solution.</p>
<p>As a Cloud Architect, I cannot proceed with the article without remarking that I review and apply the AWS Well-Architected Framework in every project, including personal projects like this one. Most of the architectural decisions were made according to the best practices and design principles of the Well-Architected Framework: for example, using Spot Instances to achieve cost efficiency, EKS/Kubernetes for scalability, and Grafana to maintain operational excellence via monitoring.</p>
<h2 id="heading-design-considerations">Design Considerations</h2>
<p>As I’ve commented in the introduction, the architecture was designed with several key Well-Architected Framework principles in mind:</p>
<ul>
<li><p><strong>Scalability</strong>: The system is capable of scaling to ingest and process real-time data <strong>without performance degradation</strong>. AWS EKS and Kubernetes provide a level of elasticity that allows the cluster to <strong>scale in and out</strong>; more specifically, the pods can scale horizontally by leveraging <strong>HPA</strong> (<em>Horizontal Pod Autoscaling</em>) in <strong>K8s</strong> (<em>Kubernetes</em>).</p>
</li>
<li><p><strong>Reliability</strong>: A TCP <strong>HealthCheck</strong> sidecar ensures that the UDP listener service remains available, preventing data loss.</p>
</li>
<li><p><strong>Cost Optimization:</strong> Spot instances and ARM-based instances (C6g) were chosen for their <strong>lower cost</strong>, while maintaining the necessary performance.</p>
</li>
<li><p><strong>Performance efficiency:</strong> Low latency and high throughput are required. For that reason, the combination of Spot Instances, a Network Load Balancer, and an <strong>ARM-based architecture</strong> is best suited for high-performance network operations.</p>
</li>
<li><p><strong>Security:</strong> A custom private network, with load balancers and fine-grained security groups that allow only the specified access/permissions. In addition, AWS IAM policies and EKS Teams ensure access is managed securely.</p>
</li>
</ul>
<h3 id="heading-why-udp">Why UDP ?</h3>
<p>Let me pause briefly here, before going deep into the architecture details, to explain the protocol we are going to ingest in AWS for this architecture: <strong>UDP</strong> (<em>User Datagram Protocol</em>), which carries the telemetry data generated by the F1 2023 game.</p>
<p>UDP is a connectionless protocol, meaning it sends data without establishing a persistent connection between the sender and receiver, unlike <strong>TCP</strong> (<em>Transmission Control Protocol</em>), which maintains a connection and ensures reliable delivery through acknowledgments and retransmissions.</p>
<p>For the scope of this article it is interesting to briefly explain and justify why UDP is used, not only in this case (<em>the F1 2023 game</em>) but also as a standard for sending telemetry or sensor/IoT data. There are several reasons for that:</p>
<ul>
<li><p><strong>Low latency:</strong> In a fast-paced environment like racing, timing is crucial. UDP allows data to be sent without waiting for an ACK from the receiver, unlike TCP, which requires confirmation that each packet has been received successfully. That makes UDP faster by considerably reducing latency.</p>
</li>
<li><p><strong>Real-Time Data:</strong> Telemetry data becomes outdated very quickly; if a packet is lost, it’s often more important to receive the next one as soon as possible rather than retry sending old data.</p>
</li>
<li><p><strong>High Throughput:</strong> UDP can handle a large volume of data without the overhead of managing connections, error recovery, or ordered delivery.</p>
</li>
<li><p><strong>Lightweight:</strong> When the F1 telemetry is broadcast to multiple clients (such as applications or external systems), UDP is more efficient for multicast communication because it doesn’t require setting up individual connections (as TCP would).</p>
</li>
</ul>
<p>On the other hand, there are a few <strong>drawbacks</strong> that should be taken into account:</p>
<ul>
<li><p><strong>No guaranteed delivery:</strong> As UDP does not provide ACK of received packets, there’s no guarantee that the data will reach the destination.</p>
</li>
<li><p><strong>No error correction:</strong> Corrupted data packets may be received without any mechanism to request a retransmission.</p>
</li>
<li><p><strong>Out-of-order delivery:</strong> Packets may arrive in a different order than they were sent, with no built-in mechanism to reorder them.</p>
</li>
<li><p><strong>Lack of congestion and flow control:</strong> Unlike TCP, UDP doesn’t adjust its transmission rate based on network conditions. The quantity of data sent at once isn’t controlled either, meaning there’s a risk of overwhelming the receiver if the data stream is too fast and too frequent.</p>
</li>
</ul>
<p>Let’s note that for the purpose of this project, some of these drawbacks are not handled, as we consider them out of scope; still, I consider it worthwhile to take them into account and address them if this architecture is used as a blueprint for a new workload. It’s important to remark that, for the specific case of real-time F1 car telemetry, the drawbacks are mitigated by the fact that data is constantly being generated and not persisted. It’s only being monitored in real time.</p>
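<p>To make the connectionless model concrete, below is a minimal sketch of a UDP listener in Python. It is illustrative only, not the project's exact listener code; port 20777 is assumed here as it is the F1 games' default telemetry port:</p>
<pre><code class="lang-python">import socket

# Bind to the port the game broadcasts to (20777 is the F1 default;
# adjust it to match your game's telemetry settings).
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 20777))
print("Listening for UDP telemetry on port 20777...")

while True:
    # recvfrom returns one datagram at a time: there is no handshake,
    # no delivery guarantee, and packets may arrive out of order.
    data, addr = sock.recvfrom(2048)  # buffer sized generously for F1 packets
    print(f"Received {len(data)} bytes from {addr}")
</code></pre>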
<h2 id="heading-architecture">Architecture</h2>
<p>High-level architecture diagram of the solution:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1728911689891/d40760e5-6d31-43b4-b322-7e699e7ead45.webp" alt class="image--center mx-auto" /></p>
<h3 id="heading-key-components">Key Components</h3>
<ul>
<li><p><strong>PlayStation 4 with F1 2023</strong>: Sends real-time telemetry data via UDP to a specific IP and port. The game allows for the data to be broadcast to the entire network or to a targeted endpoint.</p>
</li>
<li><p><strong>EKS Cluster</strong>: Deployed in the <code>eu-central-1</code> region via AWS CDK (EKS Blueprints). The cluster is composed of spot instances, specifically ARM-based C6g instances.</p>
</li>
<li><p><strong>Network Load Balancer (NLB)</strong>: Exposes the UDP listener service to the public internet, mapping to a static Elastic IP.</p>
</li>
<li><p><strong>Application Load Balancer (ALB)</strong>: Provides secure access to the WebSocket server for external clients and to Grafana for real-time telemetry visualization.</p>
</li>
<li><p><strong>UDP Listener Service</strong>: Receives telemetry data from the PlayStation and processes the UDP packets according to the F1 game’s telemetry specification. A sidecar service (Nginx) performs TCP health checks to ensure availability.</p>
</li>
<li><p><strong>WebSocket Server</strong>: Publishes the telemetry data to connected clients. For this project, only speed data is transmitted.</p>
</li>
<li><p><strong>Grafana OSS</strong>: Configured with a WebSocket plugin to allow real-time visualization of telemetry data, enabling users to monitor key metrics like vehicle speed.</p>
</li>
</ul>
<h3 id="heading-workflow">Workflow</h3>
<p>The diagram has been labeled with the main workflow process for telemetry ingestion. The different steps are briefly explained below, and a sketch of the listener-to-WebSocket fan-out follows the list:</p>
<ol>
<li><p>The PlayStation 4 with F1 2023 sends UDP telemetry data to the Elastic IP associated with the Network Load Balancer.</p>
</li>
<li><p>The NLB forwards this data to the UDP listener service running on AWS EKS.</p>
</li>
<li><p>A sidecar TCP health check ensures the UDP listener's availability by monitoring the service.</p>
</li>
<li><p>The UDP listener processes the telemetry data and forwards it to the WebSocket server.</p>
</li>
<li><p>The WebSocket server broadcasts the telemetry data to connected clients.</p>
</li>
<li><p>Grafana, connected via the WebSocket plugin, visualizes the data, providing real-time insights into the car's speed.</p>
</li>
</ol>
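<p>Steps 4 and 5 are the core of the pipeline. Below is a minimal sketch of how a listener could forward decoded telemetry to WebSocket clients, assuming the third-party <code>websockets</code> package; the ports and the <code>decode_speed</code> helper are illustrative placeholders, not the project's exact code:</p>
<pre><code class="lang-python">import asyncio
import json
import socket

import websockets

CLIENTS = set()

def decode_speed(data: bytes) -> int:
    # Placeholder: real code would parse the packet per the F1 2023 UDP spec.
    return int.from_bytes(data[:2], "little")

async def register(ws):
    # Track connected clients so telemetry can be broadcast to all of them.
    CLIENTS.add(ws)
    try:
        await ws.wait_closed()
    finally:
        CLIENTS.remove(ws)

async def pump_telemetry():
    # Read UDP datagrams and fan the decoded speed out to every client.
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("0.0.0.0", 20777))
    sock.setblocking(False)
    loop = asyncio.get_running_loop()
    while True:
        data = await loop.sock_recv(sock, 2048)
        websockets.broadcast(CLIENTS, json.dumps({"speed": decode_speed(data)}))

async def main():
    async with websockets.serve(register, "0.0.0.0", 8080):
        await pump_telemetry()

asyncio.run(main())
</code></pre>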
<h2 id="heading-demo-showcase">Demo / Showcase</h2>
<p>After deploying the project in my personal AWS account, I recorded myself playing a short race at the official Formula 1 <strong>Circuit de Barcelona-Catalunya</strong>. Obviously it’s my favourite circuit, as I was born a few kilometres away from it.  </p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1729078503349/9b3db575-247f-4b5c-947b-81ce1592803b.jpeg" alt class="image--center mx-auto" /></p>
<p>Next we can see the recorded game and, to monitor the telemetry generated, the logs of the WebSocket server produced in real time and the Grafana dashboard with the speed gauge configured:  </p>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=ILYCQ5PZYwk">https://www.youtube.com/watch?v=ILYCQ5PZYwk</a></div>
<p> </p>
<h2 id="heading-ia"> </h2>
<p>Conclusion</p>
<p>This architecture demonstrates how AWS EKS, in combination with other services, can build a scalable, reliable, and cost-effective solution for ingesting UDP data. In my opinion, this open-source project serves well as a blueprint or template for integrating UDP-based real-time communication into the cloud. The workload is simple and covers only one use case (the car’s speed), but it can be considered a starting point and adapted to other high-frequency data streaming use cases, making it a flexible and scalable template.<br />From an operational perspective, I want to mention that I’ve used Infrastructure as Code (IaC), defining the resources through CDK and the Kubernetes manifests via Kustomize, with ArgoCD for easy GitOps. This ensures a repeatable and maintainable process for scaling and evolving the solution over time.<br />For local testing, I used Minikube to emulate Kubernetes, which allowed me to test the deployments before rolling them out to AWS, ensuring a smooth integration with the target environment (develop, for this project).</p>
<h2 id="heading-next-steps">Next Steps</h2>
<ul>
<li>The full code of this solution is available in my GitHub repository:</li>
</ul>
<p><a target="_blank" href="https://github.com/acriado-dev/aws-eks-udp-telemetry">https://github.com/acriado-dev/aws-eks-udp-telemetry</a></p>
<ul>
<li>Future improvements could include processing the car’s full telemetry or adding analytics on the ingested data. I’m always open to adding new functionality to the project.</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Connecting an Arduino MKR WiFi 1010 to AWS IoT Core]]></title><description><![CDATA[Introduction
Internet of Things (IoT) has revolutionized how we interact with the world. It involves connecting everyday objects to the internet, enabling them to collect, send, and receive data. This connectivity allows for smarter decision-making, ...]]></description><link>https://amatore.dev/connecting-an-arduino-mkr-wifi-1010-to-aws-iot-core</link><guid isPermaLink="true">https://amatore.dev/connecting-an-arduino-mkr-wifi-1010-to-aws-iot-core</guid><category><![CDATA[arduino]]></category><category><![CDATA[platformio]]></category><category><![CDATA[AWS]]></category><category><![CDATA[aws iot core]]></category><category><![CDATA[mqtt]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Thu, 05 Sep 2024 07:07:41 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1725356051563/3bd8c460-be05-4787-b29a-94ec8c00a3e3.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p><strong>Internet of Things (IoT)</strong> has revolutionized how we interact with the world. It involves connecting everyday objects to the internet, enabling them to collect, send, and receive data. This connectivity allows for smarter decision-making, automation, and a new level of interactivity between the physical and digital worlds.</p>
<p>As a Cloud Architect, I’ve had the opportunity to work on several IoT cloud projects where the data from connected devices was processed centrally in AWS. These projects involved analyzing the data to drive critical business logic decisions, leading to improved efficiency, enhanced user experiences, and even the creation of new business models.</p>
<p>In this article, I’ll guide you through a step-by-step process of connecting an Arduino MKR1010 to AWS IoT Core. This tutorial is designed to help you understand how to leverage AWS’s powerful cloud services to manage and analyze IoT data, helping you get started on your own innovative projects.</p>
<h2 id="heading-what-is-arduino-mkr-wifi-1010">What is Arduino MKR Wifi 1010?</h2>
<p>The Arduino MKR WiFi 1010 is the easiest point of entry to basic IoT and pico-network application design. Whether you are looking at building a sensor network connected to your office or home router, or if you want to create a Bluetooth® Low Energy device sending data to a cellphone, the MKR WiFi 1010 is your one-stop-solution for many of the basic IoT application scenarios.</p>
<p>The board is based on the SAMD21 microcontroller, which is a low-power ARM Cortex-M0+ processor, and includes an integrated u-blox NINA-W102 module for WiFi and Bluetooth connectivity.</p>
<h2 id="heading-development-environment">Development Environment</h2>
<h3 id="heading-prerequisites">Prerequisites</h3>
<ol>
<li><strong>AWS IoT Account</strong></li>
</ol>
<ul>
<li><p><strong>Sign up for an AWS account</strong> if you don’t already have one: <a target="_blank" href="https://aws.amazon.com/free/?gclid=Cj0KCQjwiuC2BhDSARIsALOVfBLGx6RSgt3CRJTGFzHIL19H0odXhHVlLTI3JtJKtSdUZsqi5-YAl-IaAl_aEALw_wcB&amp;trk=349e66be-cf8d-4106-ae2c-54262fc45524&amp;sc_channel=ps&amp;ef_id=Cj0KCQjwiuC2BhDSARIsALOVfBLGx6RSgt3CRJTGFzHIL19H0odXhHVlLTI3JtJKtSdUZsqi5-YAl-IaAl_aEALw_wcB:G:s&amp;s_kwcid=AL!4422!3!455709741750!e!!g!!aws%20sign%20up!10817378576!108173614482&amp;all-free-tier.sort-by=item.additionalFields.SortRank&amp;all-free-tier.sort-order=asc&amp;awsf.Free%20Tier%20Types=*all&amp;awsf.Free%20Tier%20Categories=*all">AWS Sign-Up</a></p>
</li>
<li><p>Select a predefined AWS region, ideally one closer to your current location, in our case will be <strong>Paris (eu-west-3)</strong></p>
</li>
</ul>
<blockquote>
<p><strong>Caution</strong>: not all regions support AWS IoT</p>
</blockquote>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725439859730/88aba229-0441-4580-bf70-71299546021e.png" alt class="image--center mx-auto" /></p>
<ol start="2">
<li><strong>Platform.io</strong></li>
</ol>
<p>I've selected Platform.io as the development environment for Arduino projects because it offers enhanced features compared to the traditional Arduino IDE. It integrates seamlessly with Visual Studio Code, providing advanced code editing tools, debugging capabilities, and project management features.</p>
<ul>
<li><p><a target="_blank" href="https://code.visualstudio.com/download">Install Visual Studio Code</a></p>
</li>
<li><p><a target="_blank" href="https://platformio.org/install/ide?install=vscode">Install Platform.io for VSCode</a></p>
</li>
</ul>
<p><img src="https://cdn.platformio.org/images/platformio-ide-vscode-pkg-installer.4463251e.png" alt="VSCode Extensions Manager and PlatformIO IDE auto-installer" /></p>
<ul>
<li>I recommend following the Platform.io <a target="_blank" href="https://docs.platformio.org/page/ide/vscode.html#quick-start">Quick start guide</a></li>
</ul>
<ol start="3">
<li><strong>Arduino MKR1010 and USB cable</strong></li>
</ol>
<ul>
<li><p>Arduino MKR WiFi 1010 board.</p>
</li>
<li><p>Micro-USB to USB cable to connect the MKR1010 to your computer.</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725440902382/e5ddb121-2620-4688-9818-73073b223bda.png" alt class="image--center mx-auto" /></p>
<h2 id="heading-configuring-the-board">Configuring the Board</h2>
<p>In order to securely connect the Board to AWS, there is a mandatory step that we have to follow. We have to create a sketch to generate a CSR for a private key generated in an <code>ECC508/ECC608</code> crypto chip slot. Once the sketch is uploaded to the board, the user is prompted for the following information, which is contained in the generated CSR:</p>
<ul>
<li><p><em>country</em></p>
</li>
<li><p><em>state or province</em></p>
</li>
<li><p><em>locality</em></p>
</li>
<li><p><em>organization</em></p>
</li>
<li><p><em>organizational unit</em></p>
</li>
<li><p><strong>common name</strong></p>
</li>
</ul>
<p>All fields except <code>common name</code> can be considered optional. After providing it, the user can also select a slot number to use for the private key.</p>
<p>Let's define the steps to generate the CSR:</p>
<ul>
<li><p>Open Platform.io and ensure that the Board is available</p>
</li>
<li><p>Create new Project named <code>ArduinoCSRConfig</code></p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/qQl8o6tGX9I">https://youtu.be/qQl8o6tGX9I</a></div>
<p> </p>
<ul>
<li><p>Install <code>ArduinoECCX08</code> library for this Project</p>
</li>
<li><p>In <code>/src</code> folder, modify <code>main.cpp</code> to include the sketch required, should be like:</p>
</li>
</ul>
<pre><code class="lang-c"><span class="hljs-comment">/*
  ArduinoECCX08 - CSR (Certificate Signing Request)
  The circuit:

  - Arduino MKR board equipped with ECC508 or ECC608 chip

  This example code is in the public domain.
*/</span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;Arduino.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;ArduinoECCX08.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;utility/ECCX08CSR.h&gt;</span></span>
<span class="hljs-meta">#<span class="hljs-meta-keyword">include</span> <span class="hljs-meta-string">&lt;utility/ECCX08DefaultTLSConfig.h&gt;</span></span>

<span class="hljs-function">String <span class="hljs-title">readLine</span><span class="hljs-params">()</span> </span>{
  String line;

  <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>) {
    <span class="hljs-keyword">if</span> (Serial.available()) {
      <span class="hljs-keyword">char</span> c = Serial.read();

      <span class="hljs-keyword">if</span> (c == <span class="hljs-string">'\r'</span>) {
        <span class="hljs-comment">// ignore</span>
        <span class="hljs-keyword">continue</span>;
      } <span class="hljs-keyword">else</span> <span class="hljs-keyword">if</span> (c == <span class="hljs-string">'\n'</span>) {
        <span class="hljs-keyword">break</span>;
      }

      line += c;
    }
  }

  <span class="hljs-keyword">return</span> line;
}

<span class="hljs-function">String <span class="hljs-title">promptAndReadLine</span><span class="hljs-params">(<span class="hljs-keyword">const</span> <span class="hljs-keyword">char</span>* prompt, <span class="hljs-keyword">const</span> <span class="hljs-keyword">char</span>* defaultValue)</span> </span>{
  Serial.print(prompt);
  Serial.print(<span class="hljs-string">" ["</span>);
  Serial.print(defaultValue);
  Serial.print(<span class="hljs-string">"]: "</span>);

  String s = readLine();

  <span class="hljs-keyword">if</span> (s.length() == <span class="hljs-number">0</span>) {
    s = defaultValue;
  }

  Serial.println(s);

  <span class="hljs-keyword">return</span> s;
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">setup</span><span class="hljs-params">()</span> </span>{
  Serial.begin(<span class="hljs-number">9600</span>);
  <span class="hljs-keyword">while</span> (!Serial);

  <span class="hljs-keyword">if</span> (!ECCX08.begin()) {
    Serial.println(<span class="hljs-string">"No ECCX08 present!"</span>);
    <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
  }

  String serialNumber = ECCX08.serialNumber();

  Serial.print(<span class="hljs-string">"ECCX08 Serial Number = "</span>);
  Serial.println(serialNumber);
  Serial.println();

  <span class="hljs-keyword">if</span> (!ECCX08.locked()) {
    String lock = promptAndReadLine(<span class="hljs-string">"The ECCX08 on your board is not locked, would you like to PERMANENTLY configure and lock it now? (y/N)"</span>, <span class="hljs-string">"N"</span>);
    lock.toLowerCase();

    <span class="hljs-keyword">if</span> (!lock.startsWith(<span class="hljs-string">"y"</span>)) {
      Serial.println(<span class="hljs-string">"Unfortunately you can't proceed without locking it :("</span>);
      <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
    }

    <span class="hljs-keyword">if</span> (!ECCX08.writeConfiguration(ECCX08_DEFAULT_TLS_CONFIG)) {
      Serial.println(<span class="hljs-string">"Writing ECCX08 configuration failed!"</span>);
      <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
    }

    <span class="hljs-keyword">if</span> (!ECCX08.lock()) {
      Serial.println(<span class="hljs-string">"Locking ECCX08 configuration failed!"</span>);
      <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
    }

    Serial.println(<span class="hljs-string">"ECCX08 locked successfully"</span>);
    Serial.println();
  }

  Serial.println(<span class="hljs-string">"Hi there, in order to generate a new CSR for your board, we'll need the following information ..."</span>);
  Serial.println();

  String country            = promptAndReadLine(<span class="hljs-string">"Country Name (2 letter code)"</span>, <span class="hljs-string">""</span>);
  String stateOrProvince    = promptAndReadLine(<span class="hljs-string">"State or Province Name (full name)"</span>, <span class="hljs-string">""</span>);
  String locality           = promptAndReadLine(<span class="hljs-string">"Locality Name (eg, city)"</span>, <span class="hljs-string">""</span>);
  String organization       = promptAndReadLine(<span class="hljs-string">"Organization Name (eg, company)"</span>, <span class="hljs-string">""</span>);
  String organizationalUnit = promptAndReadLine(<span class="hljs-string">"Organizational Unit Name (eg, section)"</span>, <span class="hljs-string">""</span>);
  String common             = promptAndReadLine(<span class="hljs-string">"Common Name (e.g. server FQDN or YOUR name)"</span>, serialNumber.c_str());
  String slot               = promptAndReadLine(<span class="hljs-string">"What slot would you like to use? (0 - 4)"</span>, <span class="hljs-string">"0"</span>);
  String generateNewKey     = promptAndReadLine(<span class="hljs-string">"Would you like to generate a new private key? (Y/n)"</span>, <span class="hljs-string">"Y"</span>);

  Serial.println();

  generateNewKey.toLowerCase();

  <span class="hljs-keyword">if</span> (!ECCX08CSR.begin(slot.toInt(), generateNewKey.startsWith(<span class="hljs-string">"y"</span>))) {
    Serial.println(<span class="hljs-string">"Error starting CSR generation!"</span>);
    <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
  }

  ECCX08CSR.setCountryName(country);
  ECCX08CSR.setStateProvinceName(stateOrProvince);
  ECCX08CSR.setLocalityName(locality);
  ECCX08CSR.setOrganizationName(organization);
  ECCX08CSR.setOrganizationalUnitName(organizationalUnit);
  ECCX08CSR.setCommonName(common);

  String csr = ECCX08CSR.end();

  <span class="hljs-keyword">if</span> (!csr) {
    Serial.println(<span class="hljs-string">"Error generating CSR!"</span>);
    <span class="hljs-keyword">while</span> (<span class="hljs-number">1</span>);
  }

  Serial.println(<span class="hljs-string">"Here's your CSR, enjoy!"</span>);
  Serial.println();
  Serial.println(csr);
}

<span class="hljs-function"><span class="hljs-keyword">void</span> <span class="hljs-title">loop</span><span class="hljs-params">()</span> </span>{
  <span class="hljs-comment">// do nothing</span>
}
</code></pre>
<ul>
<li>Click <code>Upload and monitor</code> and, in order to generate a new CSR, define only the <code>common name</code> with the value <code>ArduinoMKR1010</code></li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/q24NfnX8xdc">https://youtu.be/q24NfnX8xdc</a></div>
<p> </p>
<ul>
<li>Copy the generated CSR text including <code>"-----BEGIN CERTIFICATE REQUEST-----"</code> and <code>"-----END CERTIFICATE REQUEST-----"</code> and save it into a <code>.csr</code> file.</li>
</ul>
<p>Finally, we have a CSR to identify the board and be able to communicate with AWS IoT Core securely.</p>
<blockquote>
<p><strong><em>NOTE: This locking process is permanent and cannot be undone, but it is necessary to use the crypto element. The configuration set by the sketch allows the use of 5 private key slots with any Cloud provider (or server). A CSR can be regenerated at any time for each of the other four slots.</em></strong></p>
</blockquote>
<h2 id="heading-connecting-to-aws-iot-core">Connecting to AWS IoT Core</h2>
<p>Once the Board is properly initialized and the CSR configured, we can proceed with the tutorial and connect the device to IoT Core using the MQTT protocol. AWS requires the use of X.509 certificates for authentication. Let's follow the steps:</p>
<ol>
<li><strong>Create Arduino Board as AWS IoT Thing</strong></li>
</ol>
<ul>
<li><p>Access your AWS account in the predefined region. For the purpose of our example, it is <strong>Paris</strong> (<code>eu-west-3</code>)</p>
</li>
<li><p>To create the Board in AWS IoT as a Thing we will use the same name that we used for the common name in the CSR configuration: <code>ArduinoMKR1010</code></p>
</li>
<li><p>Note that we are going to skip the CSR/certificate upload for the Board at this point; the certificate will be created and attached in a later step</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/znkutDTnHi4">https://youtu.be/znkutDTnHi4</a></div>
<p> </p>
<ol start="2">
<li><strong>Create a project in Platform.io environment</strong></li>
</ol>
<ul>
<li><p>Create a new project in Platform.io; for our example we have named it <code>Hello World - MKR wifi 101</code></p>
</li>
<li><p>Install the required libraries:</p>
<ul>
<li><p><em>WiFiNINA</em></p>
</li>
<li><p><em>ArduinoECCX08</em></p>
</li>
<li><p><em>ArduinoBearSSL</em></p>
</li>
<li><p><em>ArduinoMqttClient</em></p>
</li>
</ul>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/XCCw564O2tw">https://youtu.be/XCCw564O2tw</a></div>
<p> </p>
<ul>
<li>As we have two projects in our VSCode workspace, I recommend setting the newly created project as the default:</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/kIDt-tl814A">https://youtu.be/kIDt-tl814A</a></div>
<p> </p>
<ol start="3">
<li><strong>Create the Certificate and attach it to the Thing</strong></li>
</ol>
<ul>
<li><p>Move the <code>.csr</code> file generated in previous steps to a more appropriate folder inside this project, for example into a <code>/resources</code> folder.</p>
</li>
<li><p>Create the certificate in AWS IoT by uploading the generated CSR.</p>
</li>
<li><p>Download the <code>.pem.crt</code> certificate file generated in AWS and save it in the <code>/resources</code> folder. We will need this file on the Board in order to communicate with the cloud.</p>
</li>
<li><p>Ensure that the certificate is activated and Attached to the Thing created:</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://www.youtube.com/watch?v=MFJ_CgtAJwg">https://www.youtube.com/watch?v=MFJ_CgtAJwg</a></div>
<p> </p>
<ol start="4">
<li><strong>Create a Policy</strong></li>
</ol>
<ul>
<li><p>For the scope of this example, we are going to create an open policy that will allow all IoT actions on AWS (see the sketch after this list).</p>
</li>
<li><p>For real-life projects, I strongly recommend creating a strict policy and following the principle of least privilege.</p>
</li>
<li><p>Once the policy is created, it should be attached to the certificate created in the previous step.</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/Vcs63fZLGZk">https://youtu.be/Vcs63fZLGZk</a></div>
<p> </p>
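<p>While the steps above use the console, below is a minimal sketch of roughly what such a permissive policy looks like, created here with boto3 (the policy name is illustrative). Remember that allowing <code>iot:*</code> on all resources is acceptable only for experimentation:</p>
<pre><code class="lang-python">import json

import boto3

iot = boto3.client("iot", region_name="eu-west-3")

# WARNING: permissive policy for experimentation only -- it allows every
# IoT action on every resource. Follow least privilege in real projects.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "iot:*", "Resource": "*"}],
}

iot.create_policy(
    policyName="ArduinoMKR1010-open-policy",  # illustrative name
    policyDocument=json.dumps(policy_document),
)
</code></pre>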
<ol start="5">
<li><strong>Implement the Connecting sketch to AWS IoT Core</strong></li>
</ol>
<ul>
<li><p>In the project <code>Hello World - MKR wifi 101</code>, modify the file <code>src/main.cpp</code> and add the following code:</p>
</li>
</ul>
<pre><code class="lang-cpp">/*
  AWS IoT WiFi

  This sketch securely connects to an AWS IoT using MQTT over WiFi.
  It uses a private key stored in the ATECC508A and a public
  certificate for SSL/TLS authentication.

  It publishes a message every second to arduino/outgoing
  topic and subscribes to messages on the arduino/incoming
  topic.

  The circuit:
  - Arduino MKR WiFi 1010 or MKR1000
*/
#include &lt;Arduino.h&gt;
#include &lt;ArduinoBearSSL.h&gt;
#include &lt;ArduinoECCX08.h&gt;
#include &lt;ArduinoMqttClient.h&gt;
#include &lt;WiFiNINA.h&gt; // change to #include &lt;WiFi101.h&gt; for MKR1000

#include "arduino_secrets.h"

/////// Enter your sensitive data in arduino_secrets.h
const char ssid[]        = SECRET_SSID;
const char pass[]        = SECRET_PASS;
const char broker[]      = SECRET_BROKER;
const char* certificate  = SECRET_CERTIFICATE;

WiFiClient    wifiClient;            // Used for the TCP socket connection
BearSSLClient sslClient(wifiClient); // Used for SSL/TLS connection, integrates with ECC508
MqttClient    mqttClient(sslClient);

unsigned long lastMillis = 0;

unsigned long getTime() {
  // get the current time from the WiFi module
  return WiFi.getTime();
}

void connectWiFi() {
  Serial.print("Attempting to connect to the SSID: ");
  Serial.print(ssid);
  Serial.print(" ");

  bool connected = false;
  int retryCount = 0;
  const int maxRetries = 10;

  while (!connected &amp;&amp; retryCount &lt; maxRetries) {
    if (WiFi.begin(ssid, pass) == WL_CONNECTED) {
      connected = true;
    } else {
      // failed, retry
      Serial.print(".");
      delay(1000);
      retryCount++;
    }
  }

  if (connected) {
    Serial.println();
    Serial.println("You're connected to the network");
    Serial.println();
  } else {
    Serial.println();
    Serial.println("Failed to connect to the network after multiple attempts.");
  }
}

void connectMQTT() {
  Serial.print("Attempting to MQTT broker: ");
  Serial.print(broker);
  Serial.println(" ");

  while (!mqttClient.connect(broker, 8883)) {
    // failed, retry
    Serial.print(".");
    delay(3000);
  }
  Serial.println();

  Serial.println("You're connected to the MQTT broker");
  Serial.println();

  // subscribe to a topic
  mqttClient.subscribe("arduino/incoming");
}

void publishMessage() {
  Serial.println("Publishing message");

  // send message, the Print interface can be used to set the message contents
  mqttClient.beginMessage("arduino/outgoing");
  mqttClient.print("{\"deviceModel\": \"MKR 1010\" ");
  mqttClient.print(",\"temperature\": ");
  mqttClient.print(random(15, 30));
  mqttClient.print(",\"humidity\": ");
  mqttClient.print(random(1, 500));
  mqttClient.print(",\"vehicleId\": \"MKR1010-1");
  mqttClient.print("\"}");
  mqttClient.endMessage();
}

void onMessageReceived(int messageSize) {
  // we received a message, print out the topic and contents
  Serial.print("Received a message with topic '");
  Serial.print(mqttClient.messageTopic());
  Serial.print("', length ");
  Serial.print(messageSize);
  Serial.println(" bytes:");

  // use the Stream interface to print the contents
  while (mqttClient.available()) {
    Serial.print((char)mqttClient.read());
  }
  Serial.println();

  Serial.println();
}

void setup() {
  Serial.begin(115200);
  while (!Serial);

  if (!ECCX08.begin()) {
    Serial.println("No ECCX08 present!");
    while (1);
  }

  // Set a callback to get the current time
  // used to validate the servers certificate
  ArduinoBearSSL.onGetTime(getTime);

  // Set the ECCX08 slot to use for the private key
  // and the accompanying public certificate for it
  sslClient.setEccSlot(0, certificate);

  // Optional, set the client id used for MQTT,
  // each device that is connected to the broker
  // must have a unique client id. The MQTTClient will generate
  // a client id for you based on the millis() value if not set
  //
  // mqttClient.setId("clientId");

  // Set the message callback, this function is
  // called when the MQTTClient receives a message
  mqttClient.onMessage(onMessageReceived);
}

void loop() {
  if (WiFi.status() != WL_CONNECTED) {
    int numNetworks = WiFi.scanNetworks();
    Serial.println("Discovered " + String(numNetworks) + " Networks");
    connectWiFi();
  }

  if (!mqttClient.connected()) {
    // MQTT client is disconnected, connect
    connectMQTT();
  }

  // poll for new MQTT messages and send keep alives
  mqttClient.poll();

  // publish a message roughly every second
  if (millis() - lastMillis &gt; 1000) {
    lastMillis = millis();

    publishMessage();
  }
}
</code></pre>
<ul>
<li><p>Create in the <code>/include</code> folder a file for the secrets named <code>arduino_secrets.h</code>, which has already been included in the main sketch file.</p>
</li>
<li><p><code>arduino_secrets.h</code> will contain the secrets and variables required to connect, first to the WiFi and then to the Cloud provider, for example:</p>
</li>
</ul>
<pre><code class="lang-c">#define SECRET_SSID "MIWIFI_SSID"
#define SECRET_PASS "XXXXXXXXXXXX"

#define SECRET_BROKER "a21nf7ui6d7k9-ats.iot.eu-central-1.amazonaws.com"

const char SECRET_CERTIFICATE[] = R"(
-----BEGIN CERTIFICATE-----
MIIClTCCAX2gAwIBAgIVAMtpt+I79Uj24rlM4MVB4Q2mqffdMA0GCSqGSIb3DQEB
CwUAME0xSzBJBgNVBAsMQkFtYXpvbiBXZWIgU2VydmljZXMgTz1BbWF6b24uY29t
IEluYy4gTD1TZWF0dGxlIFNUPVdhc2hpbmd0b24gQz1VUzAeFw0yNDA5MDIxNDA3
MDJaFw00OTEyMzEyMzU5NTlaMCQxCjAIBgNVBAYTASAxFjAUBgNVBAMTDU15TUtS
V2lGaTEwMTAwWTATBgcqhkjOPQIBBggqhkjOPQMBBwNCAARo/XaUyfCffmCSIyl1
x1Tc4pbi4d1gu+IzOa98Fl52HaxxroFkDrKCiC6cyaK653BFXkRpWcQ7ivmNouO7
Fz06o2AwXjAfBgNVHSMEGDAWgBTwrTWex0zOILY7T5K6neYroFcHYDAdBgNVHQ4E
FgQUiqwBn2n3ixwwL4E8YI5d4b4eWdMwDAYDVR0TAQH/BAIwADAOBgNVHQ8BAf8E
BAMCB4AwDQYJKoZIhvcNAQELBQADggEBAK8iH5HpmGaxw9XQh/zyPsKxGG/dXXWr
U/mbZH6HGnf/Ux3ULnyZElBOB99KgqCutu8+k8lgpR8gLtsfC3XpL+TwIG4UHVak
fqP//CFl+n0DRWZvqw2wyvKBVyP1d9WUcTAW7x1k5ZN3AeXDT1Iy/KIkxhQXHIQi
wA1nw+YPDafHNFAviLKrUYd+Sx9hpKLDLI17609HbeY/9dsZh8MvxUmgjod7jvIy
0EGOgf8EhqyDvTVMb18ZUT0LKCwDWpMug0fZ+XK0kHA98Th0ndkE3997MqJSuY2u
tPND16KqhQPWim/akbJb1yDmP0M5xzF9Vtoh9Tv8JLxynmgc0KDq5/g=
-----END CERTIFICATE-----
)";
</code></pre>
<ul>
<li><p>Replace the secrets with the correct ones, especially the <code>SECRET_CERTIFICATE</code>, which should be the one contained in the certificate file that was downloaded from AWS into the <code>/resources</code> folder in a previous step.</p>
</li>
<li><p>To replace the <code>SECRET_BROKER</code>, the endpoint should be obtained from AWS:</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/cEjnh_aSyBs">https://youtu.be/cEjnh_aSyBs</a></div>
<p> </p>
<ol start="6">
<li><strong>Connecting to AWS IoT</strong></li>
</ol>
<ul>
<li>Once the project code is properly added to <code>Hello World - MKR wifi 101</code>, ensure that the Board is connected through USB and that the device is available in PIO Home Devices:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1725516683276/a81881e0-152c-4365-b6fe-ee1dca1b130c.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>There are several Project Tasks available. We need to build the sketch code and upload it to the connected Board.</p>
</li>
<li><p>I recommend using the following tasks:</p>
<ul>
<li><p><code>Full Clean</code></p>
</li>
<li><p><code>Build</code></p>
</li>
<li><p><code>Upload and Monitor</code></p>
</li>
</ul>
</li>
<li><p>Finally, if all steps have worked successfully, a serial monitor terminal is opened and the Board will connect to the WiFi and send messages to AWS IoT</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/l3WudC4WBDo">https://youtu.be/l3WudC4WBDo</a></div>
<p> </p>
<h3 id="heading-testing-and-monitoring-data">Testing and Monitoring Data</h3>
<p>The board continuously sends an MQTT message to IoT Core with randomly generated data every second, with this composition; see the example:</p>
<pre><code class="lang-json"> {
  <span class="hljs-attr">"deviceModel"</span>: <span class="hljs-string">"MKR 1010"</span>,
  <span class="hljs-attr">"temperature"</span>: <span class="hljs-number">22</span>,
  <span class="hljs-attr">"humidity"</span>: <span class="hljs-number">317</span>,
  <span class="hljs-attr">"vehicleId"</span>: <span class="hljs-string">"MKR1010-1"</span>
}
</code></pre>
<ol>
<li><p><strong>Monitor Outgoing Data from the Board</strong></p>
</li>
</ol>
<ul>
<li><p>We have to use the MQTT test client to subscribe to the topic <code>arduino/outgoing</code></p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/2WFmEt6v3dQ">https://youtu.be/2WFmEt6v3dQ</a></div>
<p> </p>
<ol start="2">
<li><strong>Send a Message to the Board</strong></li>
</ol>
<ul>
<li><p>We should subscribe to the topic <code>arduino/incoming</code> and publish a message from the MQTT test client in AWS IoT (a scripted alternative is sketched below)</p>
</li>
<li><p>The message sent should appear in the serial console that we have opened in Platform.io</p>
</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/QZuctmxwCCw">https://youtu.be/QZuctmxwCCw</a></div>
<p> </p>
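<p>If you prefer scripting this step instead of using the console, below is a minimal sketch that publishes to the incoming topic with boto3; the endpoint is a placeholder that must be replaced with your account's IoT data endpoint obtained earlier:</p>
<pre><code class="lang-python">import json

import boto3

# The endpoint is a placeholder -- use your account's ATS data endpoint.
iot_data = boto3.client(
    "iot-data",
    region_name="eu-west-3",
    endpoint_url="https://YOUR-ATS-ENDPOINT.iot.eu-west-3.amazonaws.com",
)

# Publish a message the board will receive on its subscribed topic.
iot_data.publish(
    topic="arduino/incoming",
    qos=0,
    payload=json.dumps({"message": "Hello MKR1010!"}),
)
</code></pre>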
<h2 id="heading-conclusion">Conclusion</h2>
<p>In this guide, we’ve walked through the process of connecting the Arduino MKR WiFi 1010 to AWS IoT Core using Platform.io in Visual Studio Code. You’ve learned how to set up your development environment, configure AWS IoT Core, and establish communication between your Arduino board and the cloud using the MQTT protocol.</p>
<p>Although this is a specific project, it can be considered the first step toward building scalable IoT solutions. In my opinion, Arduino is one of the best platforms for growing in the IoT space and for enjoying the process of building personal projects. It’s also a viable option for production systems due to its flexibility and ecosystem. Additionally, using Platform.io enhances local development capabilities and simplifies the integration of new tasks, making it an excellent tool for both hobbyists and professionals.</p>
<p>As a next step, I plan to extend this project by creating an AWS IoT Rule that processes the data coming from the Arduino MKR1010 and integrates it with a real-time platform. This will involve uploading the data from the incoming MQTT topic directly to DynamoDB.</p>
<p>I’ll be covering this in my upcoming articles, where I’ll explain the integration in more detail. You can also check out my previous article on building a real-time platform using AWS services <a target="_blank" href="https://amatore.dev/building-a-real-time-data-integration-platform-on-aws">here</a>, which will serve as a foundation for the next phase of this project.</p>
<h2 id="heading-reference">Reference</h2>
<ol>
<li>Github Project:</li>
</ol>
<p><a target="_blank" href="https://github.com/acriado-dev/arduino-mkr-1010-aws-iot-core">https://github.com/acriado-dev/arduino-mkr-1010-aws-iot-core</a></p>
<ol start="2">
<li>Arduino MKR 1010 Official Docs:</li>
</ol>
<p><a target="_blank" href="https://store.arduino.cc/en-es/products/arduino-mkr-wifi-1010?gad_source=1&amp;gclid=CjwKCAjwreW2BhBhEiwAavLwfFKUmllSUvjuY6mNsFugNXUCh-aoR3kiqFvGyX0dToSSzvty08dyBhoCE28QAvD_BwE">https://store.arduino.cc/en-es/products/arduino-mkr-wifi-1010?gad_source=1&amp;gclid=CjwKCAjwreW2BhBhEiwAavLwfFKUmllSUvjuY6mNsFugNXUCh-aoR3kiqFvGyX0dToSSzvty08dyBhoCE28QAvD_BwE</a></p>
<ol start="3">
<li>Platform.io</li>
</ol>
<p><a target="_blank" href="https://docs.platformio.org/en/latest/what-is-platformio.html">https://docs.platformio.org/en/latest/what-is-platformio.html</a></p>
<ol start="4">
<li>AWS IoT core Developer guide</li>
</ol>
<p><a target="_blank" href="https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html">https://docs.aws.amazon.com/iot/latest/developerguide/what-is-aws-iot.html</a></p>
]]></content:encoded></item><item><title><![CDATA[Building a "Real-Time" Data Integration Platform on AWS]]></title><description><![CDATA[Introduction
During my work on several projects as a Cloud Architect, I noticed a recurring challenge:

Integrating diverse data sources and delivering real-time updates to multiple clients

Whether it's monitoring environmental conditions in smart a...]]></description><link>https://amatore.dev/building-a-real-time-data-integration-platform-on-aws</link><guid isPermaLink="true">https://amatore.dev/building-a-real-time-data-integration-platform-on-aws</guid><category><![CDATA[realtime]]></category><category><![CDATA[CDK]]></category><category><![CDATA[AWS]]></category><category><![CDATA[serverless]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Tue, 03 Sep 2024 07:37:48 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1724843469291/29b63e97-f014-4bf5-9183-4d7e29389164.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>During my work on several projects as a Cloud Architect, I noticed a recurring challenge:</p>
<blockquote>
<p>Integrating diverse data sources and delivering real-time updates to multiple clients</p>
</blockquote>
<p>Whether it's monitoring environmental conditions in smart agriculture, tracking assets in logistics, or managing connected devices in smart cities, the ability to process and act on data instantly can make a significant difference.</p>
<p>In this post, I'll share my journey of building a serverless Near Real-Time Data Integration Platform using the AWS Serverless Application Model (SAM).</p>
<p>All the infrastructure code is available in a GitHub repository:</p>
<p><a target="_blank" href="https://github.com/acriado-dev/sam-real-time-websockets">https://github.com/acriado-dev/sam-real-time-websockets</a></p>
<h2 id="heading-overview">Overview</h2>
<h3 id="heading-understanding-real-time-vs-near-real-time">Understanding Real-Time vs Near Real Time</h3>
<p>Before diving into the details and architectural components, it's important to clarify a core concept for the platform that sometimes causes confusion. In the context of data processing, what is the difference between "real-time" and "near real-time"?</p>
<ul>
<li><p><strong>Real-Time:</strong> Data is processed and delivered as soon as it is generated, with virtually no delay. This is critical in scenarios where immediate action is required. Examples include industrial automation, connected vehicles, and healthcare monitoring.</p>
</li>
<li><p><strong>Near Real-Time:</strong> This refers to a slight delay in data processing, typically ranging from milliseconds to seconds. Examples include data analytics, CDNs, and social media monitoring.</p>
</li>
</ul>
<h3 id="heading-about-this-project">About this Project</h3>
<p>To answer the question,</p>
<blockquote>
<p><strong><em>is this platform Real-time or Near Real-Time?</em></strong></p>
</blockquote>
<p>We have to take into account that <strong>TRUE</strong> Real-Time implies that data is processed and delivered with minimal latency, often <em>microseconds or milliseconds</em>, under strict timing constraints. This level of immediacy is not achievable with the Cloud resources used for this project's implementation. Let's analyze the most significant ones:</p>
<ul>
<li><p><strong>AWS Lambda:</strong> Introduces some latency due to cold starts and the execution time of the function. While this latency is generally small (typically within milliseconds to a couple of seconds), it does mean that it cannot be considered "true" Real-Time but rather operates in near Real-Time.</p>
</li>
<li><p><strong>WebSocket API:</strong> This component allows low-latency communication, which is closer to Real-Time, but the overall latency is influenced by the processing times of the connected services.</p>
</li>
<li><p><strong>DynamoDB and Streams:</strong> Provide very fast data retrieval and storage capabilities, but the Lambda function triggered by the stream still introduces additional processing time.</p>
</li>
</ul>
<p>Given these factors, the most accurate description for the platform is '<strong>near Real-Time</strong>'. This approach still brings clear benefits, from cost efficiency to scalability and simplicity, and depending on the use case your application faces, Near Real-Time can be the best decision.</p>
<h2 id="heading-architecture">Architecture</h2>
<p>I will explore the project's architecture, detailing the various components and explaining how the serverless approach offers an appropriate and efficient solution. In addition, a workflow example is described to better understand how the project works:</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1724769900184/7c1282e1-9168-4e73-91d1-07e44326e702.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-components">Components:</h3>
<ul>
<li><p><strong>Client Integration</strong>: Frontend application (Mobile &amp; WebApp) that is able to connect to the WebSocket API and receive the real time data updates.</p>
</li>
<li><p><strong>WebSocket API Gateway</strong>: Entry point of the application. It's responsible for handling the WebSocket connections and the messages that are sent to the clients.</p>
</li>
<li><p><strong>OnConnect Lambda Function</strong>: Is triggered when a client connects to the WebSocket API Gateway. Stores the connection information in the DynamoDB table (a minimal sketch of this handler follows the list).</p>
</li>
<li><p><strong>OnDisconnect Lambda Function</strong>: Is triggered when a client disconnects from the WebSocket API Gateway. Removes the connection information from the DynamoDB table.</p>
</li>
<li><p><strong>OnReceiveRealTimeItem Lambda Function</strong>: Is triggered when a new item is added to the DynamoDB table. Sends the real time data updates to the clients that are interested in the item.</p>
</li>
<li><p><strong>RealTimeData DynamoDB Table</strong>: Used to store the real time data items. Each item has a unique key and a value that represents the data that is sent to the clients.</p>
</li>
<li><p><strong>WebSocketConnectionManager DynamoDB Table</strong>: Stores the connection information of the clients that are connected to the WebSocket API Gateway. Each item has a unique connectionId and a realTimeItemKey that represents the item that the client is interested in.</p>
</li>
<li><p><strong>Integration sources</strong>: Data sources responsible for sending the real-time data updates to the platform. For this project, the following sources have been considered:</p>
<ul>
<li><p><strong>SDK</strong>: AWS SDK and CLI for any compatible language.</p>
</li>
<li><p><strong>Data transfer</strong>: Any data transfer mechanism able to push data into the platform. For example, data replication from another database.</p>
</li>
<li><p><strong>Device Location &amp; Sensors</strong>: IoT devices mainly sending sensor data and telemetry. For example, GPS location of a vehicle.</p>
</li>
<li><p><strong>REST API</strong>: For example, a weather API that sends the weather data to the platform.</p>
</li>
<li><p><strong>Async (SNS, SQS)</strong>: Asynchronous mechanisms that can deliver data to the platform. For example, an SNS topic that publishes messages to the platform.</p>
</li>
<li><p><strong>S3 Events</strong>: Event triggered by an S3 bucket. For example, when a new file is uploaded to the bucket.</p>
</li>
</ul>
</li>
</ul>
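<p>To make the connection flow more concrete, here is a minimal sketch of what the OnConnect function could look like, written in TypeScript against the AWS SDK for JavaScript v3. The environment variable and attribute names are illustrative assumptions; the exact implementation lives in the repository linked above.</p>
<pre><code class="lang-typescript">import { DynamoDBClient, PutItemCommand } from "@aws-sdk/client-dynamodb";
import type { APIGatewayProxyEvent } from "aws-lambda";

const client = new DynamoDBClient({});
// Assumed environment variable; the real SAM template may use another name.
const TABLE_NAME = process.env.CONNECTIONS_TABLE ?? "WebSocketConnectionManager";

export const handler = async (event: APIGatewayProxyEvent) => {
  // The realTimeItemKey query parameter tells us which item this client follows.
  const realTimeItemKey = event.queryStringParameters?.realTimeItemKey;
  if (!realTimeItemKey) {
    return { statusCode: 400, body: "Missing realTimeItemKey" };
  }

  // Persist the connection so OnReceiveRealTimeItem can route updates later.
  await client.send(
    new PutItemCommand({
      TableName: TABLE_NAME,
      Item: {
        connectionId: { S: event.requestContext.connectionId! },
        realTimeItemKey: { S: realTimeItemKey },
      },
    })
  );

  return { statusCode: 200, body: "Connected" };
};
</code></pre>
<p>The OnDisconnect function mirrors this logic with a <code>DeleteItemCommand</code> keyed on the same <code>connectionId</code>.</p>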
<h3 id="heading-workflow">Workflow:</h3>
<p>Next, we are going to describe the different numbered steps marked in the diagram:</p>
<ol>
<li><p>The client integration connects to the WebSocket API Gateway.</p>
</li>
<li><p>The WebSocket API Gateway triggers the OnConnect Lambda function.</p>
</li>
<li><p>The OnConnect and OnDisconnect Lambda functions store and remove the connection information from the WebSocketConnectionManager DynamoDB table.</p>
</li>
<li><p>RealTimeData items are added to the RealTimeData DynamoDB table from the integration sources (an example CLI command follows this list).</p>
</li>
<li><p>The OnReceiveRealTimeItem lambda function is triggered through DynamoDB Streams.</p>
</li>
<li><p>The OnReceiveRealTimeItem lambda function sends the real time data updates to the clients that are interested in the item.</p>
</li>
<li><p>The client integration receives the real time data updates from the WebSocket API Gateway.</p>
</li>
</ol>
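<p>To simulate step 4 without a full integration source, you can write an item directly with the AWS CLI. The item schema below (a <code>vehicleId</code> key plus a <code>payload</code> attribute) is an assumption for illustration; adapt it to the attributes defined in the SAM template:</p>
<pre><code class="lang-bash"># Insert a test item; DynamoDB Streams then triggers OnReceiveRealTimeItem
aws dynamodb put-item \
  --table-name RealTimeData \
  --item '{"vehicleId": {"S": "3"}, "payload": {"S": "{\"lat\": 41.38, \"lon\": 2.17}"}}'
</code></pre>
<p>Any client connected with a matching key should then receive the payload over its WebSocket connection within a second or so.</p>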
<h2 id="heading-setup-and-test-the-platform">Setup and Test the Platform</h2>
<p>To test the platform properly, we are going to use the <code>wscat</code> utility to simulate an API client and monitor the WebSocket API. This way, each test client receives the data instantly.</p>
<pre><code class="lang-bash">npm install -g wscat
</code></pre>
<p>We can retrieve the WebSocket API endpoint from the AWS Console, although it is more convenient to expose it as an output of the SAM project.</p>
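<p>As a rough sketch, an <code>Outputs</code> entry like the following makes the endpoint visible after <code>sam deploy</code> (the logical ID <code>WebSocketApi</code> and the stage name are assumptions, not necessarily those used in the repository):</p>
<pre><code class="lang-yaml">Outputs:
  WebSocketURI:
    Description: WebSocket endpoint to use with wscat
    Value: !Sub "wss://${WebSocketApi}.execute-api.${AWS::Region}.amazonaws.com/develop"
</code></pre>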
<p>The connection command has the following structure:</p>
<pre><code class="lang-bash">wscat -c wss://[api-id].execute-api.[aws-region-id].amazonaws.com/[environment]?realTimeItemKey=[item-key]
</code></pre>
<p>Notice that the <code>realTimeItemKey</code> query parameter is required to connect to the WebSocket API. This parameter is used to filter the messages that are sent to the client. The value of this parameter must be the same as the <code>realTimeItemKey</code> attribute of the item that you want to receive updates for.</p>
<p>According to the project specification, the <code>realTimeItemKey</code> is a dynamic value and can be adapted depending on the client/integration needs.</p>
<p>In our example, we are going to use a vehicle identifier as the real-time item so that multiple clients can receive updates for it. The <code>realTimeItemKey</code> is therefore <code>vehicleId</code>, and the command to connect is:</p>
<pre><code class="lang-json">wscat -c  wss:<span class="hljs-comment">//svcz00plil.execute-api.eu-central-1.amazonaws.com/develop?vehicleId=3</span>
</code></pre>
<p>Let's look at some examples of the three main events of the platform:</p>
<ul>
<li>OnConnect:</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/lUFf14zZsxg">https://youtu.be/lUFf14zZsxg</a></div>
<p> </p>
<ul>
<li>OnDisconnect</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/x8ZYvpD_Q1k">https://youtu.be/x8ZYvpD_Q1k</a></div>
<p> </p>
<ul>
<li>OnReceiveRealTimeItem</li>
</ul>
<div class="embed-wrapper"><div class="embed-loading"><div class="loadingRow"></div><div class="loadingRow"></div></div><a class="embed-card" href="https://youtu.be/8YyZ7hJ0Ft4">https://youtu.be/8YyZ7hJ0Ft4</a></div>
<p> </p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>This project is a fantastic example of a simple near real-time integration in a Serverless architecture. It can be used as a starting point for implementing a more complex integration or added to another project to handle real-time processing.</p>
<p>In my opinion, it's important to optimize these kinds of data integrations and make them as efficient as possible. Many projects overuse polling, which can be considered an anti-pattern for proper real-time implementation.</p>
<p>Additionally, note that a special part of the project focuses on IaC, as it's crucial to implement workloads that can be reused and reproduced in any Cloud environment (in this case, AWS).</p>
]]></content:encoded></item><item><title><![CDATA[[Lab] AWS Lambda LLRT vs Node.js]]></title><description><![CDATA[Introduction
AWS Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own backend...]]></description><link>https://amatore.dev/lab-aws-lambda-llrt-vs-nodejs</link><guid isPermaLink="true">https://amatore.dev/lab-aws-lambda-llrt-vs-nodejs</guid><category><![CDATA[AWS Lambda, Serverless, LLRT, Node.js, Performance, Benchmarking, Cost Optimization, Rust, QuickJS, AWS SAM, AWS Step Functions, DynamoDB, AWS Lambda Power Tuning, Cold Start, Real-Time Processing, JavaScript Runtime, Experimental Features]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Wed, 15 May 2024 19:04:09 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1715799253631/9f12f208-484e-499e-9324-ec4f37855660.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h1 id="heading-introduction">Introduction</h1>
<p>AWS Lambda is a serverless compute service that runs code in response to events and automatically manages the underlying resources for you. You can use AWS Lambda to extend other AWS services with custom logic, or create your own backend services that operate at AWS scale and security, with performance being the main feature considered in this lab.</p>
<p>One of the biggest challenges common to AWS Lambda and serverless computing in general is performance. Unlike traditional server-based environments, where infrastructure resources can be provisioned and fine-tuned to meet specific performance requirements, serverless platforms like Lambda abstract away much of the underlying infrastructure management, leaving less control over performance optimization in the hands of developers. Additionally, as serverless architectures rely heavily on event-driven processing and pay-per-execution pricing models, optimizing performance becomes crucial for minimizing costs, ensuring responsive user experiences, and meeting stringent latency requirements.</p>
<h2 id="heading-what-is-lambda-llrt">What is Lambda LLRT?</h2>
<blockquote>
<p>Warning</p>
<p>LLRT is an <strong>experimental</strong> package. It is subject to change and intended only for evaluation purposes.</p>
</blockquote>
<p>AWS has open-sourced its JavaScript runtime, <a target="_blank" href="https://github.com/awslabs/llrt">LLRT (Low Latency Runtime)</a>, an experimental, lightweight JavaScript runtime designed to address the growing demand for fast and efficient serverless applications.</p>
<p>LLRT offers up to over <strong>10x</strong> faster startup and up to <strong>2x</strong> lower overall cost compared to other JavaScript runtimes running on <strong>AWS Lambda</strong>. It's built in Rust, using QuickJS as its JavaScript engine, which ensures efficient memory usage and swift startup.</p>
<p>Real-time processing and data transformation are the main use cases for adopting LLRT in future projects. LLRT enables a focus on critical tasks like event-driven processing or streaming data, while seamless integration with other AWS services ensures rapid response times.</p>
<h3 id="heading-key-features-of-llrt">Key features of LLRT</h3>
<ol>
<li><p><strong>Faster Startup Times</strong>: <strong>Over 10x faster startup</strong> compared to other JavaScript runtimes running on AWS Lambda.</p>
</li>
<li><p><strong>Cost optimization</strong>: Up to <strong>2x overall lower cost</strong> compared to other runtimes. By optimizing memory usage and reducing startup time, Lambda LLRT helps minimize the cost of running serverless workloads.</p>
</li>
<li><p><strong>Built on Rust</strong>: Improves performance, reduced cold start times, memory efficiency, enhanced concurrency and safety.</p>
</li>
<li><p><strong>QuickJS Engine</strong>: Lightweight and efficient JavaScript engine written in C. Ideal for fast execution, efficient memory usage, and seamless integration for embedding JavaScript in AWS Lambda.</p>
</li>
<li><p><strong>No JIT compiler:</strong> Unlike Node.js, Bun &amp; Deno, LLRT does not incorporate a JIT compiler, which reduces system complexity and runtime size. Without the JIT overhead, CPU and memory resources can be allocated more efficiently.</p>
</li>
</ol>
<h1 id="heading-overview">Overview</h1>
<p>The goal of this lab is to conduct a comparative analysis between AWS Lambda's Low Latency Runtime (LLRT) and the traditional Node.js runtime, focusing on their performance and efficiency in real-world serverless applications. By deploying identical functions on LLRT and Node.js, we can evaluate their respective startup times, execution speeds, and resource consumption.</p>
<p>Previous performance-profiling results from AWS Labs, available on GitHub, offer valuable benchmarks and insights:</p>
<ul>
<li><strong>LLRT</strong> - DynamoDB Put, ARM, 128MB:</li>
</ul>
<p><img src="https://github.com/awslabs/llrt/raw/main/benchmarks/llrt-ddb-put.png" alt="DynamoDB Put LLRT" /></p>
<ul>
<li><strong>Node.js 20</strong> - DynamoDB Put, ARM, 128MB:</li>
</ul>
<p><img src="https://github.com/awslabs/llrt/raw/main/benchmarks/node20-ddb-put.png" alt="DynamoDB Put Node20" /></p>
<h1 id="heading-lab">Lab</h1>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715508078590/2a6debca-ca56-42a1-8ba0-c9c9242a999b.png" alt class="image--center mx-auto" /></p>
<p>The lab is composed of two Lambda functions, "<strong>llrt-put-item</strong>" and "<strong>nodejs-put-item</strong>", both designed to put any received event into a DynamoDB table named "<strong>llrt-sam-table</strong>".</p>
<p>Implemented using AWS SAM, the project leverages <strong>AWS Lambda Power Tuning</strong> for performance profiling and benchmarking. The setup resembles the default benchmark by AWS Labs, and all the code developed is accessible in my personal GitHub repository, facilitating easy replication and further exploration of the lab's results:</p>
<p><a target="_blank" href="https://github.com/acriado-otc/example-aws-lambda-llrt">https://github.com/acriado-otc/example-aws-lambda-llrt</a></p>
<h2 id="heading-benchmarking-llrt-vs-nodejs">Benchmarking: LLRT vs Node.js</h2>
<p>To benchmark the lab, AWS Lambda Power Tuning, an open-source tool developed by AWS Labs, has been chosen. AWS Lambda Power Tuning is a state machine powered by AWS Step Functions that helps us test both of the lab's Lambda functions and obtain cost and/or performance results in a data-driven way.</p>
<p>While you can manually run tests on functions by selecting different memory allocations and measuring the time taken to complete, the AWS Lambda Power Tuning tool allows us to automate the process.</p>
<p>The state machine is designed to be easy to deploy and fast to execute. Also, it's language agnostic.</p>
<h2 id="heading-deployment">Deployment</h2>
<p>To deploy AWS Lambda Power Tuning, clone the official repository and choose your preferred deployment method. In my case I used the SAM CLI for simplicity, but I highly recommend using AWS CDK for IaC deployments:</p>
<p><a target="_blank" href="https://github.com/alexcasalboni/aws-lambda-power-tuning">https://github.com/alexcasalboni/aws-lambda-power-tuning</a></p>
<h2 id="heading-executing-the-state-machine"><strong>Executing the State Machine</strong></h2>
<p>Regardless of how you've deployed the state machine, you can execute it in a few different ways: programmatically, using the AWS CLI or an AWS SDK, or manually. For our lab, we are going to execute it manually, using the AWS Step Functions web console.</p>
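<p>For reference, a programmatic run via the AWS CLI would look roughly like the following; the state machine ARN and input file are placeholders:</p>
<pre><code class="lang-bash">aws stepfunctions start-execution \
  --state-machine-arn arn:aws:states:eu-west-1:123456789012:stateMachine:powerTuningStateMachine \
  --name llrt-put-item-benchmark \
  --input file://input.json
</code></pre>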
<ul>
<li>Once properly deployed, you will find the new state machine in the <a target="_blank" href="https://console.aws.amazon.com/states/">Step Functions Console</a> in the AWS account and region defined for the lab:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715514088340/93af72c4-2a63-4043-a16e-48a1841d5616.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Find it and click "<strong>Start execution</strong>".</p>
</li>
<li><p>A descriptive <strong>name</strong> and <strong>input</strong> should be provided.</p>
</li>
<li><p>We are going to execute <strong>llrt-put-item</strong> first; this is the input used:</p>
</li>
</ul>
<pre><code class="lang-json">{
  <span class="hljs-attr">"lambdaARN"</span>: <span class="hljs-string">"arn:aws:lambda:eu-west-1:XXXXXXXXXXX:function:llrt-sam-LlrtPutItemFunction-42aAa2of5wqn"</span>,
  <span class="hljs-attr">"powerValues"</span>: [
    <span class="hljs-number">128</span>,
    <span class="hljs-number">256</span>,
    <span class="hljs-number">512</span>,
    <span class="hljs-number">1024</span>
  ],
  <span class="hljs-attr">"num"</span>: <span class="hljs-number">500</span>,
  <span class="hljs-attr">"payload"</span>: {<span class="hljs-attr">"message"</span>: <span class="hljs-string">"hello llrt lambda"</span>},
  <span class="hljs-attr">"parallelInvocation"</span>: <span class="hljs-literal">true</span>,
  <span class="hljs-attr">"strategy"</span>: <span class="hljs-string">"speed"</span>
}
</code></pre>
<ul>
<li><p>All other fields can keep their default values. Finally, click "<strong>Start execution</strong>".</p>
</li>
<li><p>After a few seconds or minutes, the execution should reach the status "<strong>Succeeded</strong>".</p>
</li>
<li><p>Repeat the same process for the <strong>nodejs-put-item</strong> Lambda function. The only input field that needs to change is the function's ARN, and optionally the payload.</p>
</li>
<li><p>While each execution runs, we can follow its status and progress in the Step Functions graph view:</p>
</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715515501335/a9447b65-f356-423c-a85f-4d5b101c3316.png" alt class="image--center mx-auto" /></p>
<h3 id="heading-results-and-analysis">Results and Analysis</h3>
<p>To obtain accurate results, several executions were made for each Lambda function with the 'speed' strategy. Below are the results for each Lambda separately, followed by a final comparison:</p>
<ul>
<li><strong>LLRT Lambda function:</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715797017499/7a97ffd4-e595-451f-8844-3637ba5c7f6e.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"power"</span>: <span class="hljs-number">1024</span>,
  <span class="hljs-attr">"cost"</span>: <span class="hljs-number">9.519999999999999e-8</span>,
  <span class="hljs-attr">"duration"</span>: <span class="hljs-number">6.25</span>,
  <span class="hljs-attr">"stateMachine"</span>: {
    <span class="hljs-attr">"executionCost"</span>: <span class="hljs-number">0.00025</span>,
    <span class="hljs-attr">"lambdaCost"</span>: <span class="hljs-number">0.0001901263</span>,
    <span class="hljs-attr">"visualization"</span>: <span class="hljs-string">"https://lambda-power-tuning.show/#gAAAAQACAAQ=;REREQWbm5kIREdFAAADIQA==;ata9Muy90zTBcEwzwXDMMw=="</span>
  }
}
</code></pre>
<ul>
<li><strong>Nodejs</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715796997284/b08bc0d5-88e7-4786-8b14-46ece87cd6b0.png" alt class="image--center mx-auto" /></p>
<pre><code class="lang-json">{
  <span class="hljs-attr">"power"</span>: <span class="hljs-number">1024</span>,
  <span class="hljs-attr">"cost"</span>: <span class="hljs-number">1.512e-7</span>,
  <span class="hljs-attr">"duration"</span>: <span class="hljs-number">8.666666666666666</span>,
  <span class="hljs-attr">"stateMachine"</span>: {
    <span class="hljs-attr">"executionCost"</span>: <span class="hljs-number">0.00025</span>,
    <span class="hljs-attr">"lambdaCost"</span>: <span class="hljs-number">0.00025384590000000003</span>,
    <span class="hljs-attr">"visualization"</span>: <span class="hljs-string">"https://lambda-power-tuning.show/#gAAAAQACAAQ=;d7fdQ7y7iEKamatBq6oKQQ==;Ckp6Nc+VmzRwbUY0ilkiNA=="</span>
  }
}
</code></pre>
<ul>
<li><strong>Comparison</strong></li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1715797378204/0e32024b-ce1e-48c4-9691-839fce0c5816.png" alt class="image--center mx-auto" /></p>
<p>After several executions of the AWS Lambda Power Tuning step function, it's clear that both Lambda functions have similar performance at 1024MB and 512MB of memory allocation. The big difference appears in the interval from 128MB to 256MB, where LLRT is the more suitable option and should be considered for small functions.</p>
<p>In terms of cost, for LLRT 128MB gives the cheapest average execution and 256MB the worst, while for the Node.js function 128MB is the worst, just the opposite of LLRT. However, it's important to note that for this laboratory the benchmark criterion was 'speed'; to get more accurate cost results, the executions should be repeated with the 'cost' strategy.</p>
<h1 id="heading-conclusion">Conclusion</h1>
<p>LLRT can clearly add advantages for some projects, such as faster startup times and potential cost savings, but it currently lacks the stability, support, and real-world testing required to be considered for production use.</p>
<p>For developers working with smaller serverless functions that prioritize rapid response times and efficient resource utilization, LLRT is an alternative to traditional Lambda runtimes. However, it's essential to evaluate LLRT carefully and consider the specific requirements of your application first.</p>
<p>As LLRT continues to evolve and mature, its adoption will surely increase, and AWS will progressively refine this runtime for latency-sensitive use cases. In the meantime, let's keep monitoring LLRT's development and watch for news, especially regarding production environments.</p>
<h1 id="heading-references-and-resources">References and resources</h1>
<ul>
<li><p><a target="_blank" href="https://github.com/awslabs/llrt">https://github.com/awslabs/llrt</a></p>
</li>
<li><p><a target="_blank" href="https://aws.amazon.com/lambda/features/">https://aws.amazon.com/lambda/features/</a></p>
</li>
<li><p><a target="_blank" href="https://bellard.org/quickjs/">https://bellard.org/quickjs/</a></p>
</li>
<li><p><a target="_blank" href="https://docs.aws.amazon.com/lambda/latest/operatorguide/profile-functions.html">https://docs.aws.amazon.com/lambda/latest/operatorguide/profile-functions.html</a></p>
</li>
<li><p><a target="_blank" href="https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning">https://serverlessrepo.aws.amazon.com/applications/arn:aws:serverlessrepo:us-east-1:451282441545:applications~aws-lambda-power-tuning</a></p>
</li>
</ul>
]]></content:encoded></item><item><title><![CDATA[Top AWS re:Invent 2023 Announcements]]></title><description><![CDATA[Introduction
AWS re:Invent stands as a pivotal learning conference hosted by AWS, serving the global cloud-computing community. This immersive in-person event goes beyond traditional conferences, offering keynote announcements, extensive training and...]]></description><link>https://amatore.dev/top-aws-reinvent-2023-announcements</link><guid isPermaLink="true">https://amatore.dev/top-aws-reinvent-2023-announcements</guid><category><![CDATA[AWS]]></category><category><![CDATA[reInvent]]></category><category><![CDATA[reInvent2023]]></category><category><![CDATA[#CloudArchitecture]]></category><category><![CDATA[Devops]]></category><category><![CDATA[Announcement]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Fri, 19 Jan 2024 10:31:46 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1705660404441/2be78115-cfff-4951-8335-4535aea05ae3.jpeg" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>AWS re:Invent stands as a pivotal learning conference hosted by AWS, serving the global cloud-computing community. This immersive in-person event goes beyond traditional conferences, offering keynote announcements, extensive training and certification opportunities, over 2,000 technical sessions, an expansive Expo, and engaging after-hours events. The sheer scale and breadth of AWS re:Invent make it a must-attend for cloud professionals worldwide.<br />In the context of this article, we'll narrow our focus to the specific realm of Cloud Architecture and DevOps. As a Cloud Architect, I find immense value in exploring and dissecting the announcements that directly impact my professional domain. Join me in uncovering the latest innovations and advancements unveiled during AWS re:Invent 2023, with a keen emphasis on services and updates relevant to Cloud Architecture and DevOps.</p>
<h2 id="heading-keynote-highlights">Keynote Highlights</h2>
<p>For almost a week, AWS's main executives took the stage, and we had several interesting keynotes summarizing predictions for the coming year and their impact. Although generative AI (GenAI) dominated the conference as a central theme, there were more than 140 announcements across different topics.<br />From the point of view of Cloud Architecture, I highly recommend the keynote by Dr. Werner Vogels, but here are links to all of them:</p>
<ul>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=PMfn9_nTDbM&amp;list=PL2yQDdvlhXf_yTJdRlfK7K1ARdhYHhUvR&amp;index=4">CEO Keynote with Adam Selipsky</a>: Amazon Web Services CEO shares his perspective on cloud transformation and highlights innovations in data, infrastructure, and artificial intelligence and machine learning.</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=pJG6nmR7XxI&amp;t=3s">Monday Night Live Keynote with Peter DeSantis</a>: Senior Vice President of AWS Utility Computing, dives deep into the engineering that powers AWS services.</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=8clH7cbnIQw&amp;list=PL2yQDdvlhXf_yTJdRlfK7K1ARdhYHhUvR&amp;index=3">Keynote with Dr. Swami Sivasubramanian</a>: Vice President of Data and AI at AWS explores the powerful relationship between humans, data, and AI, unfolding right before us.</p>
</li>
<li><p><a target="_blank" href="https://www.youtube.com/watch?v=UTRBVPvzt9w&amp;list=PL2yQDdvlhXf_yTJdRlfK7K1ARdhYHhUvR&amp;index=2"><strong>Keynote with Dr. Werner Vogels</strong></a><strong>:</strong> <a target="_blank" href="https://www.youtube.com/watch?v=PMfn9_nTDbM&amp;list=PL2yQDdvlhXf_yTJdRlfK7K1ARdhYHhUvR&amp;index=4"><strong>Amazon.com</strong></a><strong>’s VP and CTO, covers best practices for designing resilient and cost-aware architectures, and discusses why artificial intelligence is something every builder must consider when developing systems and the impact this will have in our world.</strong></p>
</li>
</ul>
<h2 id="heading-cutting-edge-services-and-features">Cutting-Edge Services and Features:</h2>
<h3 id="heading-developer-tools">Developer tools</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td><strong>AWS Fault Injection Service</strong></td><td>Utilize AWS Fault Injection Service to showcase the resilience of multi-region and multi-AZ applications. Explore new scenarios that demonstrate application performance in the face of specific failure scenarios.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/use-aws-fault-injection-service-to-demonstrate-multi-region-and-multi-az-application-resilience/">Link to Blog</a></td></tr>
<tr>
<td><strong>AWS Application Composer IDE Extension</strong></td><td>Elevate visual modern applications development with the IDE extension for AWS Application Composer. Leverage AI-generated Infrastructure as Code (IaC) seamlessly within your IDE. Build modern applications and iterate on infrastructure code templates using Amazon CodeWhisperer.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/ide-extension-for-aws-application-composer-enhances-visual-modern-applications-development-with-ai-generated-iac/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Q Code Transformation (Preview)</strong></td><td>Streamline the process of upgrading Java applications with Amazon Q Code Transformation. This preview feature simplifies the modernization of existing application code using Amazon Q.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/upgrade-your-java-applications-with-amazon-q-code-transformation-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Q in Amazon CodeCatalyst (Preview)</strong></td><td>Boost developer productivity with generative-AI-powered Amazon Q in Amazon CodeCatalyst. Easily transition from conceptualizing ideas to producing fully tested, merge-ready, and running code with just a few natural language inputs.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/improve-developer-productivity-with-generative-ai-powered-amazon-q-in-amazon-codecatalyst-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon CodeCatalyst Updates</strong></td><td>Introducing custom blueprints in Amazon CodeCatalyst. Additionally, a new enterprise pricing tier is available, offering project lifecycle management along with the custom blueprints.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-codecatalyst-introduces-custom-blueprints-and-a-new-enterprise-tier/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-generative-ai-machine-learning">Generative AI / Machine Learning</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Amazon SageMaker Studio Enhancements</strong></td><td>Amazon SageMaker Studio introduces a web-based interface, Code Editor, flexible workspaces, and streamlines user onboarding. The new interface loads faster, providing consistent access to your preferred IDE and SageMaker resources.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-sagemaker-studio-adds-web-based-interface-code-editor-flexible-workspaces-and-streamlines-user-onboarding/">Link to Blog</a></td></tr>
<tr>
<td><strong>Package and Deploy Models in Amazon SageMaker</strong></td><td>Accelerate model deployment with new tools and guided workflows in Amazon SageMaker. The SageMaker Python SDK now includes the ModelBuilder class for packaging models, performing local inference, and deploying to SageMaker from your local IDE or SageMaker Studio notebooks.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/package-and-deploy-models-faster-with-new-tools-and-guided-workflows-in-amazon-sagemaker/">Link to Blog</a></td></tr>
<tr>
<td><strong>Explore and Prepare Data with Amazon SageMaker Canvas</strong></td><td>Use natural language to explore and prepare data with Amazon SageMaker Canvas. This capability, complemented by foundation model (FM)-powered natural language instructions, enhances data exploration, analysis, visualization, and transformation.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/use-natural-language-to-explore-and-prepare-data-with-a-new-capability-of-amazon-sagemaker-canvas/">Link to Blog</a></td></tr>
<tr>
<td><strong>Evaluate Models in Amazon Bedrock (Preview)</strong></td><td>Experiment with models, add automatic evaluations, and incorporate human reviews in the playground environment of Amazon Bedrock. Evaluate, compare, and select the best foundation models for your use case.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/evaluate-compare-and-select-the-best-foundation-models-for-your-use-case-in-amazon-bedrock-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon SageMaker HyperPod for Distributed Training</strong></td><td>Introducing Amazon SageMaker HyperPod, a purpose-built infrastructure for distributed training at scale. Train foundation models for extended periods while benefiting from automated cluster health monitoring and job resiliency.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/introducing-amazon-sagemaker-hyperpod-a-purpose-built-infrastructure-for-distributed-training-at-scale/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Titan Image Generator, Multimodal Embeddings, and Text Models in Amazon Bedrock</strong></td><td>Amazon Titan models, encompassing 25 years of AI and ML innovation, are now available in Amazon Bedrock. Access high-performing image, multimodal, and text model options through a fully managed API.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-titan-image-generator-multimodal-embeddings-and-text-models-are-now-available-in-amazon-bedrock/">Link to Blog</a></td></tr>
<tr>
<td><strong>Claude 2.1 Model in Amazon Bedrock</strong></td><td>Amazon Bedrock now provides access to Anthropic’s latest model, Claude 2.1. Featuring an industry-leading 200,000 token context window, reduced hallucination rates, improved accuracy for long documents, system prompts, and a beta tool use feature.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-bedrock-now-provides-access-to-anthropics-latest-model-claude-2-1/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Q: Generative AI-powered Assistant (Preview)</strong></td><td>Introducing Amazon Q, a new generative AI-powered assistant. Use Amazon Q for conversations, problem-solving, content generation, gaining insights, and taking action by connecting to your company’s information repositories, code, data, and enterprise systems.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/introducing-amazon-q-a-new-generative-ai-powered-assistant-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Q for IT Pros and Developers (Preview)</strong></td><td>Amazon Q brings generative AI-powered assistance to IT pros and developers. Minimize the time and effort required to gain knowledge, explore new AWS capabilities, learn unfamiliar technologies, and architect innovative solutions.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-q-brings-generative-ai-powered-assistance-to-it-pros-and-developers-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Guardrails for Amazon Bedrock (Preview)</strong></td><td>Implement safeguards customized to your use cases and responsible AI policies with Guardrails for Amazon Bedrock. Promote safe interactions between users and generative AI applications.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/guardrails-for-amazon-bedrock-helps-implement-safeguards-customized-to-your-use-cases-and-responsible-ai-policies-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Agents for Amazon Bedrock with Improved Control</strong></td><td>Agents for Amazon Bedrock is now available with improved control of orchestration and visibility into reasoning. Accelerate generative AI application development by orchestrating multistep tasks.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/agents-for-amazon-bedrock-is-now-available-with-improved-control-of-orchestration-and-visibility-into-reasoning/">Link to Blog</a></td></tr>
<tr>
<td><strong>Customize Models with Fine-tuning in Amazon Bedrock</strong></td><td>Privately and securely customize foundation models in Amazon Bedrock with your own data. Fine-tune models to build applications specific to your domain, organization, and use case.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/customize-models-in-amazon-bedrock-with-your-own-data-using-fine-tuning-and-continued-pre-training/">Link to Blog</a></td></tr>
<tr>
<td><strong>Knowledge Bases in Amazon Bedrock for RAG Experience</strong></td><td>Knowledge Bases now deliver a fully managed Retrieval Augmented Generation (RAG) experience in Amazon Bedrock. Securely connect foundation models to your company data for enhanced capabilities.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/knowledge-bases-now-delivers-fully-managed-rag-experience-in-amazon-bedrock/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon Transcribe Call Analytics (Preview)</strong></td><td>Amazon Transcribe Call Analytics, powered by Amazon Bedrock, introduces new generative AI-powered call summaries. Improve customer experience, and agent and supervisor productivity by automatically summarizing customer service calls.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-transcribe-call-analytics-adds-new-generative-ai-powered-call-summaries-preview/">Link to Blog</a></td></tr>
<tr>
<td><strong>Build Generative AI Apps with AWS Step Functions</strong></td><td>Build generative AI apps using AWS Step Functions and Amazon Bedrock. Step Functions provides two new optimized API actions for Amazon Bedrock: InvokeModel and CreateModelCustomizationJob.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/build-generative-ai-apps-using-aws-step-functions-and-amazon-bedrock/">Link to Blog</a></td></tr>
<tr>
<td><strong>Amazon CodeWhisperer Enhancements</strong></td><td>Amazon CodeWhisperer now offers new AI-powered code remediation, IaC support, and integration with Visual Studio. Enhance automation, security, efficiency, and accelerate code delivery with these new features.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-codewhisperer-offers-new-ai-powered-code-remediation-iac-support-and-integration-with-visual-studio/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-application-integration">Application Integration</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td><strong>AWS Step Functions</strong> Workflow Studio in AWS Application Composer</td><td>This new integration brings together the development of workflows and application resources into a unified visual infrastructure as code (IaC) builder.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/aws-step-functions-workflow-studio-is-now-available-in-aws-application-composer/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-cost-optimization">Cost Optimization</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td>Check your AWS Free Tier usage programmatically with a new API</td><td>You can use the API directly with the AWS Command Line Interface or integrate it into an application with the AWS SDKs.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/check-your-aws-free-tier-usage-programmatically-with-a-new-api/">Link to Blog</a></td></tr>
<tr>
<td>New Cost Optimization Hub centralizes recommended actions to save you money</td><td>This new AWS Billing and Cost Management feature makes it easy for you to identify, filter, aggregate, and quantify savings for AWS cost optimization recommendations.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/new-cost-optimization-hub-to-find-all-recommended-actions-in-one-place-for-saving-you-money/">Link to Blog</a></td></tr>
<tr>
<td>New Amazon WorkSpaces Thin Client provides cost-effective, secure access to virtual desktops</td><td>The Thin Client is a small cube that connects directly to a monitor, keyboard, mouse, and other USB peripherals such as headsets, microphones, and cameras.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/new-amazon-workspaces-thin-client/">Link to Blog</a></td></tr>
<tr>
<td>New Amazon CloudWatch log class for infrequent access logs at a reduced price</td><td>This new log class offers a tailored set of capabilities at a lower cost for infrequently accessed logs, enabling customers to consolidate all their logs in one place in a cost-effective manner.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/new-amazon-cloudwatch-log-class-for-infrequent-access-logs-at-a-reduced-price/">Link to Blog</a></td></tr>
<tr>
<td>Optimize your storage costs for rarely-accessed files with Amazon EFS Archive</td><td>We’ve added a new storage class for Amazon Elastic File System optimized for long-lived data that is rarely accessed.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/optimize-your-storage-costs-for-rarely-accessed-files-with-amazon-efs-archive/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-database">Database</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td>Amazon Redshift adds new AI capabilities, including Amazon Q, to boost efficiency and productivity</td><td>Now you can get SQL recommendations from natural language prompts, and Redshift now scales capacity proactively and automatically to deliver tailored performance optimizations.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-redshift-adds-new-ai-capabilities-to-boost-efficiency-and-productivity/">Link to Blog</a></td></tr>
<tr>
<td>Vector search for Amazon DocumentDB (with MongoDB compatibility) is now generally available</td><td>This new built-in capability lets you store, index, and search millions of vectors with millisecond response times within your document database.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/vector-search-for-amazon-documentdb-with-mongodb-compatibility-is-now-generally-available/">Link to Blog</a></td></tr>
<tr>
<td>Amazon DynamoDB zero-ETL integration with Amazon OpenSearch Service is now available</td><td>This capability lets you perform a search on your DynamoDB data by automatically replicating and transforming it without custom code or infrastructure.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-dynamodb-zero-etl-integration-with-amazon-opensearch-service-is-now-generally-available/">Link to Blog</a></td></tr>
<tr>
<td>Amazon ElastiCache Serverless for Redis and Memcached is now available</td><td>This new serverless offering allows customers to create a cache in under a minute and instantly scale capacity based on application traffic patterns.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-elasticache-serverless-for-redis-and-memcached-now-generally-available/">Link to Blog</a></td></tr>
<tr>
<td>Join the preview of Amazon Aurora Limitless Database</td><td>This new capability supports automated horizontal scaling to process millions of write transactions per second and manage petabytes of data in a single Aurora database.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/join-the-preview-amazon-aurora-limitless-database/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-storage">Storage</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td><strong>Amazon S3 Express One Zone</strong> high performance storage class</td><td>The new Amazon S3 Express One Zone storage class is designed to deliver up to 10x better performance than the S3 Standard storage class and is a great fit for your most frequently accessed data and your most demanding applications.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/new-amazon-s3-express-one-zone-high-performance-storage-class/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h3 id="heading-security">Security</h3>
<div class="hn-table">
<table>
<thead>
<tr>
<td>Announcement</td><td>Description</td><td>Blog Link</td></tr>
</thead>
<tbody>
<tr>
<td>Three new capabilities for Amazon Inspector broaden the realm of vulnerability scanning for workloads</td><td>Amazon Inspector introduces a new set of open source plugins and an API, continuous monitoring for your Amazon EC2 instances, and generative AI-powered assisted code remediation for your AWS Lambda functions.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/three-new-capabilities-for-amazon-inspector-broaden-the-realm-of-vulnerability-scanning-for-workloads/">Link to Blog</a></td></tr>
<tr>
<td>Amazon Detective adds new capabilities to accelerate and improve your cloud security investigations</td><td>Amazon Detective adds four new capabilities to help you save time and strengthen your security operations.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/amazon-detective-adds-investigations-and-finding-group-summaries-to-help-you-investigate-security-findings/">Link to Blog</a></td></tr>
<tr>
<td>Detect runtime security threats in Amazon ECS and AWS Fargate, new in Amazon GuardDuty</td><td>The new capability helps detect potential runtime security issues in Amazon Elastic Container Service (Amazon ECS) clusters running on both AWS Fargate and Amazon Elastic Compute Cloud (Amazon EC2).</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/introducing-amazon-guardduty-ecs-runtime-monitoring-including-aws-fargate/">Link to Blog</a></td></tr>
<tr>
<td>IAM Access Analyzer updates: Find unused access, check policies before deployment</td><td>A new analyzer continuously monitors roles and users looking for permissions that are granted but not actually used, and a policy checker validates that newly authored policies do not grant additional (and perhaps unintended) permissions.</td><td><a target="_blank" href="https://aws.amazon.com/blogs/aws/iam-access-analyzer-updates-find-unused-access-check-policies-before-deployment/">Link to Blog</a></td></tr>
</tbody>
</table>
</div><h2 id="heading-conclusion">Conclusion</h2>
<p>As a Cloud Architect, I consistently prioritize alignment with the Cloud Pillars and Well-Architected frameworks, not only within AWS projects but also across various cloud providers. These frameworks serve as standard best practices for cloud adoption. AWS re:Invent stands out as one of the premier IT innovation events globally, adopting a comprehensive approach to presenting information. All pillars and best practices are meticulously covered, showcasing AWS's commitment to innovation and solidifying its leadership in the cloud services domain.<br />Choosing a concise list of announcements for Cloud Architecture and DevOps from those presented at AWS re:Invent is challenging. The summarized tables aim to reflect my perspective on the most impactful services, particularly from an architect's standpoint.<br />While AWS re:Invent 2022 had a focus on low code/no-code innovations, this year has unmistakably brought generative AI to the forefront, influencing various resources concurrently. Amazon Q, in particular, appears to be a revolutionary way to interact with AWS documentation, addressing a common daily practice for numerous profiles. Its potential to enhance the already robust documentation with a layer of intelligent generative AI is promising.<br />For those who missed keynotes and sessions, they are available for viewing on the AWS re:Invent site: <a target="_blank" href="https://reinvent.awsevents.com/">AWS re:Invent 2023</a></p>
]]></content:encoded></item><item><title><![CDATA[Kubernetes AutoScaling Guide with Horizontal Pod Autoscaler (HPA)]]></title><description><![CDATA[Introduction
AutoScaling in Kubernetes is a critical feature that allows applications to automatically adapt to workload variations, ensuring optimal performance and efficient resource management within the cluster. The Horizontal Pod Autoscaler (HPA...]]></description><link>https://amatore.dev/kubernetes-autoscaling-guide-with-horizontal-pod-autoscaler-hpa</link><guid isPermaLink="true">https://amatore.dev/kubernetes-autoscaling-guide-with-horizontal-pod-autoscaler-hpa</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[autoscaling]]></category><category><![CDATA[HPA]]></category><category><![CDATA[Grafana]]></category><category><![CDATA[Horizontal Pod Autoscaler]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Thu, 28 Sep 2023 10:52:02 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695898477112/fcebc0f3-2f7c-4e20-9ce7-93984abaad75.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>AutoScaling in Kubernetes is a critical feature that allows applications to automatically adapt to workload variations, ensuring optimal performance and efficient resource management within the cluster. The Horizontal Pod Autoscaler (HPA) is a tool that automates the adjustment of the number of pod replicas based on predefined metrics.</p>
<p>In this guide, you'll learn how autoscaling works in Kubernetes through a real-world HPA configuration example.</p>
<p>What is the <strong>Horizontal Pod Autoscaler (HPA)</strong>?</p>
<ul>
<li><p>Scales the number of pod replicas horizontally based on metrics like CPU and memory utilization.</p>
</li>
<li><p>Ideal for applications with varying workload demands and where adding or removing identical replicas is effective.</p>
</li>
<li><p>Offers flexibility in adapting to changing traffic and resource needs.</p>
</li>
<li><p>Requires that applications are designed to be stateless or capable of handling pod replication.</p>
</li>
</ul>
<h3 id="heading-key-considerations"><strong>Key Considerations</strong></h3>
<p>In the journey to optimize your Kubernetes deployment and ensure efficient autoscaling, it's essential to weigh the factors that influence your choice of an autoscaling solution. There are several considerations that should be aligned with your application's requirements and broader objectives:</p>
<ol>
<li><p><strong>Application Suitability</strong>: Evaluate if your application can benefit from horizontal scaling.</p>
</li>
<li><p><strong>Resource Metrics</strong>: Ensure that your performance bottlenecks relate to metrics like CPU and memory.</p>
</li>
<li><p><strong>Cluster Size</strong>: Consider the cluster's size and capacity for effective autoscaling.</p>
</li>
<li><p><strong>Monitoring and Alerting</strong>: Implement monitoring and alerting for resource utilization and performance.</p>
</li>
<li><p><strong>Pod Design</strong>: Ensure pods are stateless and replicable for effective HPA.</p>
</li>
<li><p><strong>Testing</strong>: Thoroughly test HPA in a non-production environment.</p>
</li>
<li><p><strong>Scaling Policies</strong>: Define clear scaling policies aligned with your objectives.</p>
</li>
<li><p><strong>Resource Quotas</strong>: Be aware of resource quotas to prevent overscaling.</p>
</li>
<li><p><strong>Cost Implications</strong>: Understand how autoscaling affects cloud infrastructure costs.</p>
</li>
<li><p><strong>Maintenance and Updates</strong>: Keep configurations and policies up to date.</p>
</li>
<li><p><strong>Alternative Solutions</strong>: Consider alternative autoscaling solutions like Vertical Pod Autoscaler or custom controllers if they better fit your use case.</p>
</li>
</ol>
<h2 id="heading-hpa-configuration-example">HPA Configuration Example</h2>
<pre><code class="lang-yaml"><span class="hljs-attr">apiVersion:</span> <span class="hljs-string">autoscaling/v2</span>
<span class="hljs-attr">kind:</span> <span class="hljs-string">HorizontalPodAutoscaler</span>
<span class="hljs-attr">metadata:</span>
  <span class="hljs-attr">name:</span> <span class="hljs-string">app-autoscaler</span>
  <span class="hljs-attr">namespace:</span> <span class="hljs-string">app-namespace</span>
<span class="hljs-attr">spec:</span>
  <span class="hljs-attr">scaleTargetRef:</span>
    <span class="hljs-attr">apiVersion:</span> <span class="hljs-string">apps/v1</span>
    <span class="hljs-attr">kind:</span> <span class="hljs-string">Deployment</span>
    <span class="hljs-attr">name:</span> <span class="hljs-string">app-deployment</span>
  <span class="hljs-attr">minReplicas:</span> <span class="hljs-number">5</span>
  <span class="hljs-attr">maxReplicas:</span> <span class="hljs-number">15</span>
  <span class="hljs-attr">metrics:</span>
  <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Resource</span>
    <span class="hljs-attr">resource:</span>
      <span class="hljs-attr">name:</span> <span class="hljs-string">cpu</span>
      <span class="hljs-attr">target:</span>
        <span class="hljs-attr">type:</span> <span class="hljs-string">Utilization</span>
        <span class="hljs-attr">averageUtilization:</span> <span class="hljs-number">75</span>
  <span class="hljs-attr">behavior:</span>
    <span class="hljs-attr">scaleDown:</span>
      <span class="hljs-attr">stabilizationWindowSeconds:</span> <span class="hljs-number">300</span>
      <span class="hljs-attr">policies:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Percent</span>
        <span class="hljs-attr">value:</span> <span class="hljs-number">25</span>
        <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">10</span>
    <span class="hljs-attr">scaleUp:</span>
      <span class="hljs-attr">stabilizationWindowSeconds:</span> <span class="hljs-number">0</span>
      <span class="hljs-attr">policies:</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Percent</span>
        <span class="hljs-attr">value:</span> <span class="hljs-number">100</span>
        <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">15</span>
      <span class="hljs-bullet">-</span> <span class="hljs-attr">type:</span> <span class="hljs-string">Pods</span>
        <span class="hljs-attr">value:</span> <span class="hljs-number">2</span>
        <span class="hljs-attr">periodSeconds:</span> <span class="hljs-number">15</span>
      <span class="hljs-attr">selectPolicy:</span> <span class="hljs-string">Max</span>
</code></pre>
<h3 id="heading-explanation">Explanation:</h3>
<ul>
<li><p><strong>metadata</strong>: In this section, you set the name and namespace of the HPA.</p>
</li>
<li><p><strong>spec</strong>: Here, you define the HPA's characteristics:</p>
<ul>
<li><p><strong>scaleTargetRef</strong>: It selects the resource to be automatically scaled, in this case, the Deployment named "app-deployment."</p>
</li>
<li><p><strong>minReplicas</strong>: It ensures that at least 5 pod replicas are always running.</p>
</li>
<li><p><strong>maxReplicas</strong>: It allows a maximum of 15 pod replicas.</p>
</li>
</ul>
</li>
<li><p><strong>metrics</strong>: These metrics specify the criteria for scaling decisions. In this example, CPU usage is used with an average utilization target of 75%. The controller computes the desired replica count as <code>ceil(currentReplicas * currentUtilization / 75)</code>, clamped between minReplicas and maxReplicas.</p>
</li>
<li><p><strong>behavior</strong>: It configures the scaling behavior:</p>
<ul>
<li><p><strong>scaleDown</strong>: Specifies how scaling down occurs. After a 300-second stabilization window, the HPA may remove at most 25% of the current replicas every 10 seconds.</p>
</li>
<li><p><strong>scaleUp</strong>: Defines how scaling up is performed. With no stabilization window, the HPA may add up to 100% of the current replicas or 2 pods every 15 seconds; <code>selectPolicy: Max</code> applies whichever policy allows the larger change.</p>
</li>
</ul>
</li>
</ul>
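<p>Assuming the manifest above is saved as <code>hpa.yaml</code>, you can apply it and observe the controller's decisions with standard kubectl commands:</p>
<pre><code class="lang-bash"># Create or update the autoscaler
kubectl apply -f hpa.yaml

# Watch targets and replica counts as the load changes
kubectl get hpa app-autoscaler -n app-namespace --watch

# Inspect scaling events and conditions in detail
kubectl describe hpa app-autoscaler -n app-namespace
</code></pre>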
<h2 id="heading-monitoring-hpa-in-kubernetes">Monitoring HPA in Kubernetes</h2>
<p>Monitoring the Horizontal Pod Autoscaler (HPA) is essential to ensure effective autoscaling. The right monitoring mechanism depends on factors such as the project, the application, and the team's expertise. For the scope of this article, we're going to use Prometheus and Grafana as the main monitoring approach:</p>
<ol>
<li><p><strong>Install Prometheus and Grafana</strong>:</p>
<ul>
<li><p>Deploy Prometheus and Grafana in your Kubernetes cluster using Helm charts or manual installation (a Helm sketch follows this list).</p>
</li>
<li><p>Set up Prometheus to scrape HPA-related metrics by adding a scrape configuration to your Prometheus config.</p>
</li>
<li><p>Create or import Grafana dashboards tailored for HPA monitoring. Customize them to fit your needs.</p>
</li>
</ul>
</li>
<li><p><strong>Visualize Key Metrics</strong>:</p>
<ul>
<li><p>In Grafana, visualize metrics like CPU and memory utilization, replica counts, and scaling events.</p>
<ul>
<li>Example:</li>
</ul>
</li>
</ul>
</li>
</ol>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695898068355/fdf307e2-3a1f-464c-8f03-db0c9217e131.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Set up alerts for scaling events and resource utilization thresholds.</li>
</ul>
<ol start="3">
<li><p><strong>Evaluate Scaling Performance</strong>:</p>
<ul>
<li><p>Monitor HPA behavior, ensuring it scales pods in response to changing workloads.</p>
</li>
<li><p>Analyze historical data to optimize scaling configurations.</p>
</li>
</ul>
</li>
<li><p><strong>Continuous Monitoring</strong>:</p>
<ul>
<li>Regularly review and adjust your monitoring setup to proactively address scaling issues and anomalies.</li>
</ul>
</li>
<li><p><strong>Enhance with Logging and Tracing</strong>:</p>
<ul>
<li>Consider adding logging and tracing solutions for deeper insights during scaling events.</li>
</ul>
</li>
</ol>
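<p>As a reference for the installation step above, here is a minimal sketch using the community Helm charts. The release name <code>monitoring</code> and the <code>monitoring</code> namespace are assumptions for this example:</p>
<pre><code class="lang-plaintext"># Add the community chart repository and refresh the index
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update

# kube-prometheus-stack bundles Prometheus, Grafana and common dashboards
helm install monitoring prometheus-community/kube-prometheus-stack \
  --namespace monitoring --create-namespace

# Reach the bundled Grafana locally; the service name follows the release name
kubectl port-forward svc/monitoring-grafana 3000:80 -n monitoring
</code></pre>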
<h2 id="heading-conclusion">Conclusion</h2>
<p>The use of the Horizontal Pod Autoscaler (HPA) in Kubernetes is essential to ensure that applications can automatically adapt to changes in workload demand. This configuration maintains a minimum number of active replicas (5 in this case) and scales up to a maximum of 15 replicas based on CPU utilization.</p>
<p>As resource needs change, the HPA will automatically adjust the number of replicas to ensure optimal performance and efficient resource management within the cluster. AutoScaling is fundamental for ensuring availability and performance in dynamically changing Kubernetes environments.</p>
]]></content:encoded></item><item><title><![CDATA[Step-by-Step Guide: Connecting to Amazon EKS and Monitoring with Lens]]></title><description><![CDATA[Introduction
This guide is your roadmap to seamlessly connect your developer workstation to an Amazon Elastic Kubernetes Service (EKS) cluster and harness the power of Lens for effective monitoring and management. In this walkthrough, we'll walk you ...]]></description><link>https://amatore.dev/step-by-step-guide-connecting-to-amazon-eks-and-monitoring-with-lens</link><guid isPermaLink="true">https://amatore.dev/step-by-step-guide-connecting-to-amazon-eks-and-monitoring-with-lens</guid><category><![CDATA[Kubernetes]]></category><category><![CDATA[EKS]]></category><category><![CDATA[lens]]></category><category><![CDATA[AWS]]></category><category><![CDATA[AmazonEKS]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Thu, 28 Sep 2023 09:48:22 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695894876915/b5f965eb-1a7a-4b93-aae4-cc7e08175a64.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>This guide is your roadmap to seamlessly connect your developer workstation to an Amazon Elastic Kubernetes Service (EKS) cluster and harness the power of Lens for effective monitoring and management. In this walkthrough, we cover the essential steps required to establish this connection, ensuring you have the tools and configurations in place to streamline your Kubernetes operations.</p>
<p>Whether you're an experienced Kubernetes practitioner or just beginning your journey with EKS, this guide will equip you with the knowledge and resources to make the process straightforward. Let's embark on this journey to unlock the potential of EKS and Lens for your Kubernetes development and administration needs.</p>
<h2 id="heading-steps-to-follow">Steps to Follow</h2>
<h3 id="heading-step-1-configure-the-aws-environment">Step 1: Configure the AWS Environment</h3>
<p>You need to obtain the "Access Key" and "Secret Access Key" to interact with AWS services programmatically or through the AWS CLI.</p>
<p><a target="_blank" href="https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html">https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-files.html</a></p>
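<p>As a quick reference, configuring the CLI with those keys typically looks like the following; the values shown are placeholders, not real credentials:</p>
<pre><code class="lang-plaintext"># Interactive setup: prompts for the four values below
aws configure
# AWS Access Key ID [None]: AKIAXXXXXXXXXXXXXXXX
# AWS Secret Access Key [None]: xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
# Default region name [None]: eu-central-1
# Default output format [None]: json

# Verify that the credentials are picked up correctly
aws sts get-caller-identity
</code></pre>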
<h3 id="heading-step-2-install-kubectl">Step 2: Install kubectl</h3>
<p>Kubectl is a command-line tool that allows you to interact with Kubernetes clusters. You can download and install it by following the instructions in the official Kubernetes documentation:</p>
<p><a target="_blank" href="https://kubernetes.io/docs/tasks/tools/install-kubectl/">https://kubernetes.io/docs/tasks/tools/install-kubectl/</a></p>
<h3 id="heading-step-3-install-lens">Step 3: Install Lens</h3>
<p>Lens is a Kubernetes cluster management tool that allows you to visualize and manage Kubernetes clusters centrally. You can download and install Lens from the official website:</p>
<p><a target="_blank" href="https://k8slens.dev/">https://k8slens.dev/</a></p>
<h3 id="heading-step-4-configure-kubectl-for-eks">Step 4: Configure kubectl for EKS</h3>
<p>With the AWS CLI properly installed and configured, point kubectl at your EKS cluster by updating your kubeconfig with the following command:</p>
<pre><code class="lang-plaintext">aws eks --region eu-central-1 update-kubeconfig --name cluster-stack
</code></pre>
<h3 id="heading-step-5-verify-the-connection">Step 5: Verify the Connection</h3>
<p>To ensure that kubectl is configured correctly and can connect to the EKS cluster, you should run the following command:</p>
<pre><code class="lang-plaintext">kubectl config get-contexts
</code></pre>
<p>You should see a list of Kubernetes contexts, including the context for your EKS cluster that was created earlier.</p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695894476812/7509ffa1-17ba-4caf-b5f9-af6b0f40d439.png" alt class="image--center mx-auto" /></p>
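<p>Optionally, you can go one step further and confirm that kubectl can actually reach the cluster API with these standard commands:</p>
<pre><code class="lang-plaintext"># Confirm the API server endpoint is reachable
kubectl cluster-info

# List the worker nodes registered with the EKS cluster
kubectl get nodes
</code></pre>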
<h3 id="heading-step-6-open-lens-and-add-the-cluster">Step 6: Open Lens and Add the Cluster</h3>
<p>Open the Lens application on your local machine.</p>
<ul>
<li>Click "Add Cluster" on the main screen.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695893753639/75b1bd32-3f19-4077-8e4b-23237fabc24b.png" alt class="image--center mx-auto" /></p>
<ul>
<li>Select "Kubeconfig" as the authentication method.</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695894112796/ed1344d1-1096-45d4-975b-e1291fb91ac4.png" alt class="image--center mx-auto" /></p>
<ul>
<li><p>Click "Browse" and select the kubectl configuration file generated in Step 4 (usually located at ~/.kube/config).</p>
</li>
<li><p>Click "Next" and provide a name for the cluster.</p>
</li>
<li><p>Click "Connect" to add the cluster to Lens.</p>
</li>
</ul>
<p>You should now be able to view and manage your EKS cluster in Lens remotely from your local machine.</p>
<h2 id="heading-conclusion">Conclusion</h2>
<p>To put it simply, Lens is a fantastic choice for connecting to your Amazon Elastic Kubernetes Service (EKS) clusters. Why? Well, it's like having a super user-friendly control center for all things Kubernetes. It makes monitoring and managing your clusters easier.</p>
<p>With Lens, you get a clear and easy-to-use dashboard of your cluster's health and performance. So, if you want to make your Kubernetes life easier and more efficient, Lens is the way to go!</p>
]]></content:encoded></item><item><title><![CDATA[Step-by-Step Guide: Enabling Versioning on Amazon S3]]></title><description><![CDATA[Introduction
Amazon S3 (Simple Storage Service) is a scalable and durable cloud-based storage service by Amazon Web Services (AWS) for storing and retrieving data, files, and objects. In this tutorial, we will explore the capability of enabling versi...]]></description><link>https://amatore.dev/step-by-step-guide-enabling-versioning-on-amazon-s3</link><guid isPermaLink="true">https://amatore.dev/step-by-step-guide-enabling-versioning-on-amazon-s3</guid><category><![CDATA[S3]]></category><category><![CDATA[versioning]]></category><category><![CDATA[cloud-storage]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Wed, 27 Sep 2023 17:11:12 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695833854881/0f1efa8c-ca12-44da-8a35-8c7d677a32a3.avif" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Amazon S3 (Simple Storage Service) is a scalable and durable cloud-based storage service by Amazon Web Services (AWS) for storing and retrieving data, files, and objects. In this tutorial, we will explore the capability of enabling versioning in Amazon S3. Whether you're a seasoned cloud architect or a newcomer to the cloud, understanding how versioning works and how to implement it can be a game-changer in maintaining data consistency, recovering from accidental deletions, and complying with regulatory requirements.</p>
<h2 id="heading-what-is-versioning-in-s3">What is versioning in S3?</h2>
<p>S3 versioning is a feature on Amazon S3 that allows you to preserve, retrieve, and manage multiple versions of objects stored in a bucket.</p>
<h4 id="heading-key-concepts">Key concepts</h4>
<ul>
<li><p>It is used to preserve, retrieve, and restore every version of every object stored in an S3 bucket.</p>
</li>
<li><p>Versioning is enabled at the S3 bucket level.</p>
</li>
<li><p>It can be enabled from the AWS Console / SDKs / API.</p>
</li>
<li><p>Once enabled, versioning cannot be completely disabled; the alternative is placing the bucket in a "versioning-suspended" state.</p>
</li>
<li><p>A drawback of keeping multiple versions of an object is that you are billed for each version, since every version is stored in S3.</p>
</li>
<li><p>To avoid versions of the same object piling up, S3 has a feature called Lifecycle Management, which lets you decide what to do with older versions automatically (see the sketch after this list).</p>
</li>
<li><p>One advantage of versioning is that we can set permissions per version, i.e., we can define which version of an object is public and which one is private.</p>
</li>
</ul>
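<p>For readers who prefer the CLI, versioning and a version-cleanup lifecycle rule can be configured with the commands below. This is a minimal sketch: the bucket name my-versioned-bucket and the 30-day window are assumptions for this example:</p>
<pre><code class="lang-plaintext"># Enable versioning on an existing bucket (bucket name is hypothetical)
aws s3api put-bucket-versioning \
  --bucket my-versioned-bucket \
  --versioning-configuration Status=Enabled

# Confirm the bucket now reports versioning as Enabled
aws s3api get-bucket-versioning --bucket my-versioned-bucket

# Optional: expire noncurrent versions after 30 days via Lifecycle Management
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-versioned-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "expire-old-versions",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "NoncurrentVersionExpiration": { "NoncurrentDays": 30 }
    }]
  }'
</code></pre>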
<h2 id="heading-preparation">Preparation</h2>
<ul>
<li><p>Create an AWS Account: If you don't already have an AWS account, you'll need to create one. Go to the AWS website and follow the sign-up process.</p>
</li>
<li><p>Access Key and Secret Access Key: To interact with AWS services programmatically or through the AWS CLI, you'll need to generate access keys. These keys consist of an Access Key ID and a Secret Access Key. They are essential for authenticating your AWS requests.</p>
</li>
<li><p>Configure AWS CLI: If you plan to use the AWS Command Line Interface (CLI), you'll need to install it on your local machine. Once installed, configure the CLI with your access key and region. This step is necessary for performing actions on your AWS resources, including S3 buckets.</p>
</li>
<li><p>Select the Target S3 Bucket: Decide which S3 bucket you want to enable versioning for. Ensure you have the necessary permissions to modify the bucket settings.</p>
</li>
</ul>
<h2 id="heading-practical-example">Practical example</h2>
<p>In the following steps, we will guide you through the process of enabling versioning in Amazon S3:</p>
<ul>
<li>Create new bucket</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695833958572/dec4c17d-e531-43d2-ae9f-76a5dad593f4.avif" alt class="image--center mx-auto" /></p>
<ul>
<li>Enable versioning</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695833965173/271bb463-47b0-431b-81a8-6000578b8348.avif" alt class="image--center mx-auto" /></p>
<ul>
<li>In order to test it, upload a new object. (For the purpose of this tutorial we have used a fancy orc avatar):</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834022636/911960a9-3d6b-47ff-b0df-da48227a682a.avif" alt class="image--center mx-auto" /></p>
<ul>
<li>Make the bucket public with a bucket policy: use the Policy generator and grant public access to the s3:GetObject action:</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834129651/a7c780a1-f8f5-435f-8754-7fce5193446d.avif" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834136850/29f4be32-3472-47b9-a1bc-ec66f92ddbe2.avif" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834147553/9eeab625-a54e-49c5-a2f3-f6f5f0e20824.avif" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834155241/341c2582-15e7-4e8c-9b0b-d2a9f7fe14d9.avif" alt class="image--center mx-auto" /></p>
<ul>
<li>Finally, we upload the same object with a different color balance to clearly differentiate the versions (a CLI sketch follows the screenshots):</li>
</ul>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834233875/b9faeee1-4948-47d7-b856-669f04bf600c.avif" alt class="image--center mx-auto" /></p>
<p><img src="https://cdn.hashnode.com/res/hashnode/image/upload/v1695834298901/fe8f169c-2c50-49b2-a672-92e4094240e1.avif" alt class="image--center mx-auto" /></p>
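<p>The same checks can be done from the CLI. Here is a minimal sketch for listing and retrieving specific versions; the bucket name, key, and version ID are placeholders:</p>
<pre><code class="lang-plaintext"># List all versions of a single object
aws s3api list-object-versions \
  --bucket my-versioned-bucket \
  --prefix orc-avatar.png

# Download a specific, older version by its VersionId
aws s3api get-object \
  --bucket my-versioned-bucket \
  --key orc-avatar.png \
  --version-id 3sL4kqtJlcpXroDTDmJxrmyolxOyYLXB \
  orc-avatar-v1.png
</code></pre>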
<h2 id="heading-conclusion">Conclusion</h2>
<p>In conclusion, enabling versioning in Amazon S3 is a fundamental step towards enhancing the resilience, security, and control of your data in the cloud. By following the steps outlined in this guide, you've empowered yourself with a powerful tool for data preservation, recovery, and compliance. Versioning ensures that no data is lost to accidental deletions, provides a historical record of changes, and safeguards your critical information.</p>
]]></content:encoded></item><item><title><![CDATA[Migration patterns for Serverless Applications]]></title><description><![CDATA[Introduction
Serverless frameworks are gaining popularity for their cost-effectiveness, scalability, and rapid development capabilities and increasingly, many companies consider that the future of computing is serverless.
A serverless architecture re...]]></description><link>https://amatore.dev/migration-patterns-for-serverless-applications</link><guid isPermaLink="true">https://amatore.dev/migration-patterns-for-serverless-applications</guid><category><![CDATA[serverless]]></category><category><![CDATA[migration]]></category><category><![CDATA[Cloud Computing]]></category><dc:creator><![CDATA[Amador Criado]]></dc:creator><pubDate>Wed, 27 Sep 2023 16:26:00 GMT</pubDate><enclosure url="https://cdn.hashnode.com/res/hashnode/image/upload/v1695831926466/51e547ff-e7b9-4a39-8ced-534ee4de3d36.png" length="0" type="image/jpeg"/><content:encoded><![CDATA[<h2 id="heading-introduction">Introduction</h2>
<p>Serverless frameworks are gaining popularity for their cost-effectiveness, scalability, and rapid development capabilities; increasingly, many companies consider that the future of computing is serverless.</p>
<p>A serverless architecture represents an approach to creating and operating applications and services without the need for direct infrastructure management. Although serverless applications are ultimately executed on servers, the responsibility for overseeing these servers is shifted to a particular service provider, such as Amazon Web Services (AWS) with Lambda functions, or Google Cloud Platform with Cloud Functions.</p>
<p>Effective migration is crucial to unlock the advantages of serverless architecture. Migrating existing applications or developing new ones in a serverless environment offers benefits like scalability, cost-efficiency, and simplified management. However, without a well-planned and executed migration strategy, organizations may struggle to realize these advantages. Effective migration ensures a smooth transition, minimizes disruption, and maximizes the potential of serverless computing for agility, cost savings, and improved performance.</p>
<p>The main goal of this article is to present the most relevant migration strategies to serverless applications. It's crucial to understand all the migration strategies, and there are several considerations to take into account and questions to answer before deciding which strategy best fits the migration.</p>
<p>Notice that the migration patterns presented in this article are common patterns and should be considered a good starting point for any migration, because they cover the bases for many serverless applications.</p>
<h2 id="heading-serverless-fundamentals">Serverless Fundamentals</h2>
<p>In order to provide more context for the article, let's dig deeper into the key advantages of serverless applications and some common use cases:</p>
<h4 id="heading-what-does-serverless-exactly-means">What does 'Serverless' exactly means?</h4>
<ul>
<li><p>No Server Management</p>
</li>
<li><p>Flexible Scaling</p>
</li>
<li><p>Automated high availability</p>
</li>
<li><p>No idle capacity</p>
</li>
</ul>
<h4 id="heading-top-five-use-cases-for-serverless-computing">Top Five use cases for Serverless Computing</h4>
<ul>
<li><p>Media Processing</p>
</li>
<li><p>Event-Driven Applications</p>
</li>
<li><p>Building APIs</p>
</li>
<li><p>Chatbots</p>
</li>
<li><p>Webhooks</p>
</li>
</ul>
<h2 id="heading-migration-challenges">Migration Challenges</h2>
<p>While serverless computing has many benefits, it also has some challenges that must be acknowledged or addressed before you can be successful; this section presents them briefly. It's important to understand that these considerations are not only architectural: security and compliance are also important topics.</p>
<ul>
<li><p><strong>Cold Start Latency:</strong> Serverless functions experience a delay when they're invoked for the first time, known as "cold start latency."</p>
</li>
<li><p><strong>Vendor Lock-In:</strong> Each serverless provider offers its own unique services and tools. Transitioning away from a specific provider can be difficult and costly.</p>
</li>
<li><p><strong>Legacy Systems Integration:</strong> Migrating from traditional systems to serverless may require integration with legacy systems. That adds a customization layer, increasing complexity and making maintenance harder.</p>
</li>
<li><p><strong>Resource Limitations:</strong> Serverless platforms impose resource limits, such as execution time and memory, and in some cases the limits differ between Regions. High-scale projects could be affected in terms of performance.</p>
</li>
<li><p><strong>Security Concerns:</strong> Ensuring the security of data in a serverless environment is a big challenge. Developers must carefully configure security settings and access controls to prevent unauthorized access and data breaches.</p>
</li>
<li><p><strong>Cost Management:</strong> Misconfigured functions or excessive usage can lead to unexpected expenses.</p>
</li>
<li><p><strong>Compliance and Data Residency:</strong> In regulated industries, compliance can be complex when data is distributed across serverless functions and services.</p>
</li>
<li><p><strong>Monitoring and Debugging:</strong> Identifying issues and performance bottlenecks may require specialized tools.</p>
</li>
</ul>
<h2 id="heading-the-migration-process">The Migration process</h2>
<h4 id="heading-initial-approach"><strong>Initial approach</strong></h4>
<p>Three preliminary questions should be answered before undertaking the migration and deciding on a strategy:</p>
<ul>
<li><p>How is the computing infrastructure implemented?</p>
</li>
<li><p>How is application development approached?</p>
</li>
<li><p>How is application deployment approached?</p>
</li>
</ul>
<p>Answering these three questions, completely or partially, is required to decide on the strategy; at a minimum, all three points should be considered and evaluated accordingly.</p>
<h4 id="heading-infrastructure-abstraction"><strong>Infrastructure abstraction</strong></h4>
<p>Understanding the main infrastructure abstractions and the degree of architecture modernization allows us to group the different strategies, and answers part of the questions introduced in the initial approach. For the scope of this article, three abstractions are considered:</p>
<p><strong>Server based</strong></p>
<p>Definition:</p>
<p>Also known as server-centric infrastructure, this is an IT architecture where a central server, or multiple servers, plays a crucial role in providing services, managing resources, and storing data for clients or end users.</p>
<p>Considerations:</p>
<ul>
<li><p>Monolithic</p>
</li>
<li><p>Single Artifact releases</p>
</li>
<li><p>Usually have some manual deployments</p>
</li>
<li><p>Single Technology stack</p>
</li>
<li><p>Minimal impact moving application to the cloud</p>
</li>
</ul>
<p><strong>Containerized</strong></p>
<p>Definition:</p>
<p>Containers are lightweight, portable, and isolated environments that encapsulate an application and all the libraries, runtime components, and configuration settings it needs to run. This approach offers several advantages in terms of efficiency, scalability, and ease of management.</p>
<p>Considerations:</p>
<ul>
<li><p>Platform independence</p>
</li>
<li><p>Environment parity</p>
</li>
<li><p>Straightforward deployments</p>
</li>
<li><p>Portability</p>
</li>
<li><p>Security policies of container images and runtime</p>
</li>
</ul>
<p><strong>APIs and microservices</strong></p>
<p><strong>Definition</strong>:</p>
<p>APIs and microservices are two interconnected concepts that play a crucial role in modern software development and architecture. They are often used together to build scalable, flexible, and maintainable applications. APIs are sets of rules and protocols that allow one software application or component to interact with another, and a microservices architecture structures the application into those components: a collection of small, independent services.</p>
<p><strong>Considerations</strong>:</p>
<ul>
<li><p>Event-driven microservices</p>
</li>
<li><p>CI/CD with polyglot technology stacks</p>
</li>
<li><p>Frequent releases.</p>
</li>
<li><p>Applications need to be rewritten.</p>
</li>
</ul>
<h2 id="heading-migration-patterns">Migration patterns</h2>
<p><strong>Leapfrog</strong></p>
<p>As the name suggests, with the leapfrog pattern, you bypass interim steps and go straight from an on-premises legacy architecture to a serverless cloud architecture.</p>
<p><strong>Example Scenario: Image Processing</strong></p>
<p><strong>Traditional Setup:</strong></p>
<ul>
<li><p>You have a web app that processes images on a dedicated server.</p>
</li>
<li><p>Server maintenance, scaling, and cost management are challenges.</p>
</li>
</ul>
<p><strong>Serverless Migration:</strong></p>
<ul>
<li><p>Rewrite the image processing code as a serverless function (e.g., AWS Lambda).</p>
</li>
<li><p>Configure Amazon API Gateway to expose an HTTP endpoint that invokes the "UploadImage" Lambda function to put images into S3.</p>
</li>
<li><p>Trigger the processing function whenever an image is uploaded to the S3 bucket (a CLI sketch follows this list).</p>
</li>
</ul>
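<p>To make the event wiring concrete, here is a hedged sketch using the AWS CLI. The processing function name "ProcessImage", the bucket name, the IAM role ARN, the account ID, and the zip file are all placeholders for this example:</p>
<pre><code class="lang-plaintext"># Create the processing function from a packaged deployment artifact
aws lambda create-function \
  --function-name ProcessImage \
  --runtime python3.12 \
  --handler app.handler \
  --role arn:aws:iam::123456789012:role/lambda-exec-role \
  --zip-file fileb://process_image.zip

# Allow S3 to invoke the function
aws lambda add-permission \
  --function-name ProcessImage \
  --statement-id s3-invoke \
  --action lambda:InvokeFunction \
  --principal s3.amazonaws.com \
  --source-arn arn:aws:s3:::my-image-bucket

# Send ObjectCreated events from the bucket to the function
aws s3api put-bucket-notification-configuration \
  --bucket my-image-bucket \
  --notification-configuration '{
    "LambdaFunctionConfigurations": [{
      "LambdaFunctionArn": "arn:aws:lambda:eu-central-1:123456789012:function:ProcessImage",
      "Events": ["s3:ObjectCreated:*"]
    }]
  }'
</code></pre>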
<p><strong>Lift and Shift</strong></p>
<p>In this model, existing applications are kept intact initially.</p>
<p>Developers experiment with serverless tools, such as Cloud Functions, in low-risk internal scenarios like log processing or scheduled tasks. Progressively, other serverless components are adopted for tasks such as data transformations and parallelization of processes.</p>
<p>At a certain stage in the adoption process, it's recommended to take a strategic look at how more serverless and microservices infrastructure might address different business goals.</p>
<p>Then, create a production workload as a pilot, and with initial success and lessons learned in the small processes adopted to serverless previously, more applications could be migrated incrementally to serverless.</p>
<p><strong>Example Scenario: Content Management System (CMS) Migration</strong></p>
<p><strong>Traditional Setup:</strong></p>
<ul>
<li><p>You have a traditional CMS running on a virtual server or on-premises infrastructure.</p>
</li>
<li><p>The CMS serves content, manages user accounts, and handles user-generated content.</p>
</li>
</ul>
<p><strong>Lift and Shift Migration to Serverless:</strong></p>
<ul>
<li><p>Identify a serverless platform, such as a fully managed cloud service like AWS Amplify or Firebase.</p>
</li>
<li><p>Create a serverless application using this platform.</p>
</li>
<li><p>Replicate the functionality of your existing CMS within this serverless environment.</p>
</li>
<li><p>Migrate your content, user data, and user-generated content to the new serverless application.</p>
</li>
<li><p>Adjust DNS settings or routing to direct traffic to the new serverless application.</p>
</li>
<li><p>Perform testing to ensure the new serverless CMS functions correctly and serves content as expected.</p>
</li>
</ul>
<p><strong>Strangler</strong></p>
<p>With the strangler pattern, an organization incrementally and systematically decomposes monolithic applications by creating APIs and building event-driven components that gradually replace components of the legacy application.</p>
<p>Distinct API endpoints can point to old versus new components, and safe deployment options (such as canary deployments) let you fall back to the legacy version with very little risk.</p>
<p>New feature branches can be serverless first, and legacy components can be decommissioned as they are replaced.</p>
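<p>One hedged way to implement that safe-deployment step is Lambda alias traffic shifting, sketched below. The function name, alias name, and weights are assumptions for this example, and API Gateway canary stages are an equally valid route:</p>
<pre><code class="lang-plaintext"># Publish the current code as a new immutable version (returns e.g. version 2)
aws lambda publish-version --function-name product-image-resizer

# Shift 10% of "live" alias traffic to version 2, keeping 90% on version 1
aws lambda update-alias \
  --function-name product-image-resizer \
  --name live \
  --function-version 1 \
  --routing-config '{"AdditionalVersionWeights": {"2": 0.10}}'
</code></pre>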
<p><strong>Example Scenario: Legacy eCommerce</strong></p>
<p><strong>Traditional Setup:</strong></p>
<ul>
<li><p>You have a legacy eCommerce application running on a monolithic server.</p>
</li>
<li><p>It's challenging to maintain and update the old codebase.</p>
</li>
</ul>
<p><strong>Strangler Migration to Serverless:</strong></p>
<ul>
<li><p>Start by identifying a specific function, like product image resizing, within the monolithic app.</p>
</li>
<li><p>Rewrite this function as a serverless microservice (e.g., AWS Lambda).</p>
</li>
<li><p>Expose this microservice via an API Gateway endpoint.</p>
</li>
<li><p>Gradually, route new product image resizing requests to the serverless microservice while the rest of the eCommerce app remains on the monolithic server.</p>
</li>
<li><p>Over time, refactor and migrate other functions in a similar manner.</p>
</li>
<li><p>Eventually, the entire application is decomposed into serverless microservices.</p>
</li>
<li><p>Benefits: Incremental modernization, reduced risk, and improved agility without a complete system rewrite.</p>
</li>
</ul>
<h2 id="heading-top-migration-questions-you-need-to-answer">Top migration questions you need to answer:</h2>
<p>Although the most common migration pattern for moving complex applications is the strangler pattern, where you refactor and rewrite parts of your application with serverless, in many cases the move to serverless coincides with decomposing a monolith in order to implement event-driven microservices.</p>
<p>For the scope of this post, it's worth giving some examples of the questions that need to be answered:</p>
<ul>
<li><p>What does this application do and how are its components organized?</p>
</li>
<li><p>How does the application scale and what components drive the capacity you need?</p>
</li>
<li><p>Do you have schedule-based tasks?</p>
</li>
<li><p>Do you have workers listening to a queue?</p>
</li>
<li><p>Where can you refactor or enhance functionality without impacting the current implementation?</p>
</li>
<li><p>What is the infrastructure cost/budget to run your workload?</p>
</li>
<li><p>What will be the cost/investment of your team’s time to maintain the application once it is in production?</p>
</li>
</ul>
]]></content:encoded></item></channel></rss>