Saturday, 3 January 2026

The Ten Commandments of Hinduism



The ten commandments of Hinduism are as follows:

paropakaara punyaaya
paapaaya para peedanam

Hinduism is a practice, the bedrock of whose culture is giving. It's a Dhaanam (charity) culture, it's a Tyaagam (sacrifice) culture. Punya is paropakaaram (helping others); Paapam is parapeedanam (inflicting pain on others).

The first five are called "Yamah" and the second five are called "Niyamah". Together, the Yamah and the Niyamah form the ten commandments that ensure a moral life in Hinduism.

Do Nots:

1) Himsaa varjanam – varjanam means avoidance; himsaa means violence. Avoidance of all forms of violence: physical, verbal and mental violence towards others, including krodha (anger/revenge) and matsarya (jealousy). Do praayaschitham (atonement) for unavoidable violence (for example, fighting for your country in a war as a profession). Praayaschitham is done through the "Pancha Maha Yagnya". Here yagnya denotes duties towards Deva (the Almighty), Pithru (ancestors), Manushya (fellow human beings), Bhoota (animals and plants) and Brahma (gaining knowledge through systematic study of the scriptures).

2) Asathya varjanam – avoid all avoidable lies, as lying is a paapam. Unavoidable lies told for the general welfare should be followed by praayaschitham, which again is done through the pancha maha yagnya.

3) Stheya varjanam – in simple language, astheyam, which means avoidance of stealing. Stealing means not just burglary; any illegitimate possession comes under stealing.

4) Maithuna varjanam – maithunam here means inappropriate sexual relationships; avoid them in thought, word and deed.

5) Parigraha varjanam – avoidance of over-possession: hoarding, amassing and so on. To put it in positive language: simple living and, to the extent possible, sharing with others. Avoidance of kama (lust/desire), lobha (greed), moha (delusion/attachment) and mada (pride/intoxication).

These are the five avoidances - himsa varjanam, asathya varjanam, stheya varjanam, maithuna varjanam and parigraha varjanam.

Dos:

Then there are five positive disciplines to be followed in Hindu practice.

6) Shaucham – shaucham means purity, both outside and inside; inside in terms of body, mind and thoughts.

7) Santosha – positive contentment with whatever I acquire through legitimate means; positive contentment and being happy. "Yallabhase nija karmopaattam vittam thena vinodhaya chitham" (be content with the wealth that comes through your own karma).

8) Tapas – tapas means any self-denial practised for mastery over one's own instruments. Self-denial like fasting, mounam (silence), etc.; any vow in which I deny myself certain comforts for self-mastery. Tapas is austerity or self-denial.

9) Swaadhyaaya – scriptural study is very important, as it refines one's mind and intellect and focuses them on the things that matter spiritually. Scriptural study is an important discipline or value called swaadhyaaya.

10) Eshwara pranidhaanam – this means surrender to the Lord: accepting every experience as karma phalam coming as a gift from God, and having the patience and courage to accept every experience without allowing it to generate negative emotion.

                                                   *   *   *   *  *   *   *   *   *   *

Monday, 29 December 2025

Implementing GenAI Cost Optimization strategies

 



As organizations increasingly integrate generative AI capabilities into their operational workflows, managing and optimizing inference costs becomes as crucial as traditional cloud cost management.

P95 latency per dollar spent is an important metric for evaluating the latency-cost tradeoffs of GenAI applications: P95 latency (the 95th percentile of response times) per dollar spent provides a direct measure of the performance received relative to cost.

This blog explores practical approaches to balancing cost considerations with performance requirements, so you can maintain P95 latency per dollar when deploying GenAI solutions.

Developing token efficiency systems while maintaining effectiveness.

    • Develop token efficiency systems by using token estimation and tracking, context window optimization, response size controls, prompt compression, context pruning, and response limiting to reduce foundation model costs while maintaining effectiveness. 
    • Implement token counting capabilities to accurately estimate and track token usage before making API calls, allowing for better cost prediction and optimization. Using model-specific tokenizers is an effective approach for accurately estimating token counts: different foundation models use different tokenization algorithms, so using the tokenizer specific to your chosen model ensures that your estimates closely match how the model will tokenize your input, enabling more precise cost estimation and context window management (see the sketch below).
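For example, the sketch below uses OpenAI's tiktoken tokenizer to estimate token counts and input cost before a call; the price constant is a placeholder, and other providers ship their own tokenizers or token-counting APIs.

    # pip install tiktoken -- OpenAI's tokenizer library; other providers
    # (e.g., Anthropic) expose their own tokenizers or token-counting APIs.
    import tiktoken

    PRICE_PER_1K_INPUT_TOKENS = 0.0025  # placeholder; use your provider's price card

    def estimate_input_cost(prompt: str, model: str = "gpt-4o") -> tuple[int, float]:
        """Count tokens with the model-specific tokenizer and estimate input cost."""
        enc = tiktoken.encoding_for_model(model)  # model-specific tokenization rules
        n_tokens = len(enc.encode(prompt))
        return n_tokens, n_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS

    tokens, cost = estimate_input_cost("Summarize our Q3 sales figures in two sentences.")
    print(f"{tokens} input tokens, approx ${cost:.6f}")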

Create cost-effective model selection frameworks. 

    • It is critical to create cost-effective model selection frameworks by evaluating cost-capability tradeoffs and tiering foundation model usage based on query complexity and required response quality. One example is right-sizing model selection and implementing tiered model usage based on query complexity: route simple queries to smaller, less expensive models that can adequately handle straightforward requests, direct medium-complexity queries to mid-tier models that balance cost and capability, and serve complex queries with the most powerful but expensive models. A minimal routing sketch follows.
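The sketch below illustrates the tiered routing idea; the model IDs and the complexity heuristic are placeholders (a production router would typically use a trained classifier or an LLM-based judge).

    # Placeholder model IDs; substitute the tiers available in your environment.
    TIERS = {
        "simple":  "small-cheap-model",
        "medium":  "mid-tier-model",
        "complex": "large-expensive-model",
    }

    def classify(query: str) -> str:
        """Toy complexity heuristic; production routers often use a classifier."""
        if any(k in query.lower() for k in ("analyze", "compare", "plan", "multi-step")):
            return "complex"
        return "simple" if len(query.split()) < 20 else "medium"

    def route(query: str) -> str:
        return TIERS[classify(query)]

    print(route("What are your store hours?"))                                 # simple tier
    print(route("Analyze these contracts and compare the liability clauses.")) # complex tier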

Develop high-performing GenAI systems that maximize resource utilization and throughput for workloads.

    • Develop high-performance foundation model systems by using batching strategies, capacity planning, utilization monitoring, auto-scaling configurations, and provisioned throughput optimization to maximize resource utilization and throughput for GenAI workloads. 
    • For workloads that don't require real-time inference responses, batch processing can reduce foundation model costs while maintaining output quality: for example, pre-generating product descriptions in nightly batch jobs rather than generating them on demand, as in the sketch below.
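A minimal sketch of such a nightly job follows; invoke_model is a stand-in for your provider's SDK call, and managed batch inference APIs, where available, can reduce costs further.

    import json

    def invoke_model(prompt: str) -> str:
        # Stand-in for your provider's SDK call (e.g., a Bedrock or OpenAI client).
        return f"[generated description for a prompt of {len(prompt)} chars]"

    def nightly_description_batch(products: list[dict]) -> dict[str, str]:
        """Pre-generate descriptions offline so user requests hit a cheap lookup."""
        results = {}
        for product in products:
            prompt = f"Write a two-sentence product description for: {json.dumps(product)}"
            results[product["sku"]] = invoke_model(prompt)
        return results  # persist to a table or cache and serve at request time

    catalog = [{"sku": "A-100", "name": "Trail backpack", "capacity_l": 35}]
    print(nightly_description_batch(catalog))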

Create intelligent caching systems to reduce costs and improve response times.

    • Create intelligent caching systems such as semantic caching, result fingerprinting, edge caching, deterministic request hashing, and prompt caching to improve response times and avoid unnecessary foundation model invocations.
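Of these, deterministic request hashing is the simplest to sketch: hash the normalized prompt and generation parameters into a cache key, and invoke the model only on a miss (semantic caching generalizes this by embedding requests and matching on similarity). Below is a minimal in-memory version, assuming an invoke callable that wraps your model client.

    import hashlib
    import json

    _cache: dict[str, str] = {}  # swap for Redis/DynamoDB in production

    def cache_key(prompt: str, model: str, params: dict) -> str:
        """Deterministic hash over the normalized prompt and generation parameters."""
        payload = json.dumps({"p": " ".join(prompt.split()).lower(), "m": model, **params},
                             sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

    def cached_invoke(prompt: str, model: str, params: dict, invoke) -> str:
        key = cache_key(prompt, model, params)
        if key not in _cache:            # miss: pay for exactly one model call
            _cache[key] = invoke(prompt)
        return _cache[key]               # hit: zero-cost, low-latency answer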

You may also try other techniques: implementing recursive summarization to compress long documents while preserving key information before submitting inputs to foundation models (sketched below); using prompt templates that put the most relevant information first, so critical content is processed even under truncation; using context pruning algorithms to compress prompts while maintaining their effectiveness; and using response size control mechanisms that limit output token generation while maintaining answer quality.
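As an illustration of recursive summarization, the sketch below chunks a document, summarizes each chunk, and recurses on the concatenated summaries until the text fits one context window; summarize is a stub standing in for a real model call.

    def summarize(text: str, max_chars: int = 900) -> str:
        # Stub standing in for a real "summarize this text" model call.
        return text[:max_chars]

    def recursive_summarize(document: str, chunk_chars: int = 4000) -> str:
        """Compress a long document level by level until it fits one context window."""
        if len(document) <= chunk_chars:
            return summarize(document)
        chunks = [document[i:i + chunk_chars]
                  for i in range(0, len(document), chunk_chars)]
        partials = " ".join(summarize(c) for c in chunks)  # map: summarize each chunk
        return recursive_summarize(partials, chunk_chars)  # reduce: recurse on summaries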

Cost optimization is an ongoing process that should evolve with your GenAI application's needs and usage patterns. By regularly monitoring model performance and inference costs against established metrics, and by keeping tabs on future model releases and pricing changes, you can optimize costs while maintaining the effectiveness of your GenAI investments.


Saturday, 21 June 2025

Strands Agents – An Open-source Python SDK for building agents






According to Gartner, over a third of all enterprise apps will be powered by agentic AI by 2028. This evolution isn't a roadmap for the future; it's already happening today.

There are many ways of building agents on the AWS platform. The key proposition is to meet customers wherever they are in their agentic AI journey, whether through out-of-the-box agents, fully managed custom agents, DIY agents, or a combination of these.

·     Use out-of-the-box specialized agents with Amazon Q: for customers looking to deploy agentic experiences immediately with minimal technical overhead, Amazon Q Business and Amazon Q Developer let you test and deploy agentic AI right away, or customize it further to meet the specific needs of your business.

·     Or build your agentic AI application with guardrails, fully managed for you, using Amazon Bedrock: fully managed agents that integrate with your systems, data, and tools, giving you the flexibility to test different foundation models in a secure, managed environment, with a comprehensive toolset to build, deploy, operate, maintain, and scale trusted, high-performing AI agents.

·     Or build DIY agents using Strands Agents, which provides a model-driven approach to building AI agents in just a few lines of code.

 

What are Strands Agents?


Strands Agents is an open-source Python SDK for building agents in just a few lines of code. It takes a model-driven approach, using the automated reasoning capabilities of models to build agents. It allows an agent to perform complex, multistep reasoning and actions; it is built for developers by developers, and open-sourced by AWS.

·     Strands simplifies agent development by embracing the capabilities of models to plan, chain thoughts, call tools, and reflect. Like the two strands of DNA, Strands connects the two core pieces of an agent: the model and the tools.

·     Get started quickly: with Strands, developers can simply define a prompt and a list of tools in code to build an agent, then test it locally and deploy it to the cloud (see the sketch after this list).

·     Model driven approach: Strands plans the agent's next steps and executes tools using the advanced reasoning capabilities of models.

·     Highly flexible: For more complex agent use cases, developers can customize their agent's behavior in Strands.

·       Model Agnostic: Strands can run anywhere and can support any model with reasoning and tool use capabilities, including models in Amazon Bedrock, Anthropic, Ollama, and other providers through LiteLLM.

·     Deploy anywhere: deploy and run agents in any environment where you run Python applications, including ECS, Lambda, and EC2.

·     Built-in MCP: native support for Model Context Protocol (MCP) servers, enabling access to thousands of pre-built tools. Strands also natively provides a number of pre-built tools, for example: image_reader to process and analyze images, use_aws to interact with AWS services, and http_request to make API calls, fetch web data, and call local HTTP servers.
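For example, a minimal agent might look like the sketch below, following the SDK's quickstart pattern. It assumes the strands-agents and strands-agents-tools packages are installed and that default model credentials (for example, Amazon Bedrock access) are configured; the word_count tool is a made-up illustration.

    # pip install strands-agents strands-agents-tools
    from strands import Agent, tool
    from strands_tools import http_request  # one of the pre-built tools noted above

    @tool
    def word_count(text: str) -> int:
        """Count the words in a piece of text."""
        return len(text.split())

    # The default model provider and credentials (e.g., Amazon Bedrock access)
    # are assumed to be configured in your environment.
    agent = Agent(tools=[word_count, http_request])

    agent("Fetch https://example.com and count the words in the page title.")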

Core working principle: at the heart of Strands' capabilities lies the agentic loop, a continuous cycle in which an agent interacts with its model and tools to accomplish a task prompted by the user. This loop leverages the remarkable advancements of large language models (LLMs), which can now reason, plan, and select tools with native proficiency.

In each iteration of the loop, Strands engages the LLM with the user's prompt, agent context, and a description of the available tools. The LLM can respond in various ways, including in natural language for the end user, outlining a series of steps, reflecting on previous actions, or selecting one or more tools to utilize. When the LLM chooses a tool, Strands seamlessly executes it and returns the result to the LLM. Once the task is complete, Strands delivers the agent's final outcome.
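A schematic sketch of this loop is shown below; llm_call and run_tool are hypothetical helpers standing in for the model client and the tool executor, and the message format is illustrative rather than Strands' internal representation.

    def agentic_loop(user_prompt, tools, llm_call, run_tool, max_turns=10):
        """Schematic agentic loop; llm_call/run_tool are hypothetical helpers."""
        messages = [{"role": "user", "content": user_prompt}]
        for _ in range(max_turns):
            reply = llm_call(messages, tools)  # prompt + agent context + tool specs
            if reply["type"] == "tool_use":    # the model selected a tool
                result = run_tool(reply["name"], reply["input"])
                messages.append({"role": "tool", "content": result})
            else:                              # natural-language final answer
                return reply["content"]
        return "Stopped after max_turns without a final answer."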

 


Join the Strands Agents community

Strands Agents is an open-source project licensed under the Apache License 2.0. Contributions are welcome: developers can add support for additional models and tools, collaborate on new features, or expand the documentation. If you find a bug, have a suggestion, or have something to contribute, join the project on GitHub.


Wednesday, 2 April 2025

Understanding Agentic AI through MCP (Model Context Protocol)


                                                 art by: J. Sridharan, Dubai

Agentic AI Orchestrator Protocols in Simple Terms

In November 2024, Anthropic open-sourced MCP (Model Context Protocol), an open standard that enables developers to build secure, two-way connections between their GenAI applications and external tools and data sources. MCP simplifies connections between AI systems and various data sources, helping deliver faster innovation in context-aware agentic AI applications.

What are Agentic AI Orchestrators?

Agentic AI is seen as the second big evolution of GenAI, and agentic AI orchestrators are the enablers of this evolution. LLMs continue to evolve, providing multi-modal and extended built-in capabilities; in bare terms, LLMs are good at predicting the next word, which makes them great at reciting poems, writing essays, converting text to visuals, translating languages and many other standalone tasks. But there is a fundamental limitation: an LLM cannot take any intelligent action unless it is integrated with tools and data sources. For example, if you ask an LLM for information it was not trained on (say, current stock market trends), it cannot give you an accurate answer unless it is connected to a web search engine.

In the GenAI application space, in terms of standardized communication protocols, we are still in an era comparable to the pre-REST-API era. Just as RESTful APIs brought simplified, standardized communication between client and server applications, there is a significant opportunity to standardize the communication protocol between LLMs, tools, and data sources. Consider a simple GenAI application built with a single model (a single-agent application), say a personal travel assistant that helps you not only plan but also book a holiday. This agent must fetch details from multiple sources to fulfil the task: Google Maps to determine places of interest, an OTA such as Expedia and other providers such as Booking.com, charging your credit card, and so on. Without a standardized way of connecting to these tools, building such GenAI applications, though not impossible, is very engineering-intensive.

In simple terms, building a GenAI agentic application has four components (in short, TATA):

  • The Task to accomplish,
  • The model/s or Agent/s,
  • The Tools the agent needs to accomplish the task,
  • The Answer the agent provides.

Without standardized protocols, the following are a few of the key challenges in accomplishing TATA:

  • Custom-built implementations require significant engineering effort to plumb in tools and data sources; in addition, consider the re-engineering effort when sources change.
  • When connecting multiple agents, inconsistent prompt logic, with different methods for accessing and federating tools and data, produces inefficient answers.
  • The scale problem, the "n times m problem": a large number of client applications interacting with a mesh of servers and tools results in a complex web of integrations and duplication, each requiring specific integration effort.

MCP allows AI Agents to use tooling, resources and even prompt libraries in a standardized manner, thus extending the Agentic AI capabilities significantly to build more meaningful GenAI applications.

To keep the MCP architecture simple: MCP uses a client-server architecture, whose key components at a high level are an MCP client, an MCP server, and the MCP communication protocol. Developers expose their data through lightweight MCP servers. For example, Anthropic has already released reference MCP servers for popular services such as Google Maps and Slack. By connecting to these MCP servers, you can easily build an agentic AI MCP client that follows the MCP protocol.

MCP Architecture

MCP uses a client-server architecture that contains the following components:

  • Host: An MCP host is a program or AI tool that requires access to data through the MCP protocol, such as Claude Desktop, an integrated development environment (IDE), or any other AI application.
  • Client: Protocol clients that maintain one-to-one connections with servers.
  • Server: Lightweight programs that expose capabilities through standardized MCP, allowing access to data sources, tools, and even prompt libraries.
  • Local data sources: Your databases, local files, and services that MCP servers can securely access.
  • Remote services: External systems available over the internet through APIs that MCP servers can connect to.
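To make this concrete, here is a minimal sketch of an MCP server using the FastMCP helper from the official MCP Python SDK; the server name and the stubbed hotel_availability tool are illustrative assumptions.

    # pip install "mcp[cli]"  -- the official MCP Python SDK
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("travel-helper")  # illustrative server name

    @mcp.tool()
    def hotel_availability(city: str, nights: int) -> str:
        """Stubbed availability check; a real server would call an OTA API."""
        return f"3 hotels available in {city} for {nights} night(s)."

    if __name__ == "__main__":
        mcp.run(transport="stdio")  # stdio transport suits local hosts like Claude Desktop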

Thus, by providing an open-source protocol and a universal standard that simplifies connections between AI systems and various data sources, MCP will deliver agility in building efficient, context-aware AI applications. Consequently, it will enable AI agents to autonomously perform complex tasks.

The success and widespread adoption of protocols like MCP depend on industry participation and on standardization efforts around interoperability, portability, and adherence to common standards, allowing AI applications to operate across different platforms and jurisdictions, which is crucial for global companies and responsible AI.

MCP will help build trust by ensuring AI systems are transparent, reliable, and secure. The clarity provided by the MCP protocol guidelines will reduce compliance complexity, lower barriers to innovation, and foster faster development of AI products.

 

Tuesday, 24 September 2024

Key differences between a Transformer Architecture and a State Space Model Architecture for Building LLMs

 



A transformer architecture captures relationships within a sequence using attention mechanisms that compare every token with every other token, while a state space model (SSM) architecture models the evolution of a system over time by maintaining a fixed-size "state" that summarizes the sequence seen so far. This makes SSMs more efficient at handling long sequences, but can limit their ability to capture fine-grained details in the data. Essentially, transformers excel at capturing fine-grained dependencies within the context window, while state space models prioritize long sequences and overall system dynamics.

Key differences: 

Attention mechanism:

Transformers heavily rely on attention mechanisms to weigh the importance of different parts of an input sequence when generating the output, allowing for flexible context understanding. State space models typically do not use attention mechanisms in the same way. 
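To make this concrete, here is a toy scaled dot-product self-attention in Python with NumPy; note the L x L score matrix, which is where the quadratic cost in sequence length (discussed below) comes from.

    import numpy as np

    def attention(Q, K, V):
        """Toy scaled dot-product self-attention over a length-L sequence."""
        d = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d)                   # (L, L): the quadratic part
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V                              # context-weighted values

    L, d = 8, 16
    x = np.random.randn(L, d)
    out = attention(x, x, x)  # every token attends to every other token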

State representation:

In a transformer, the "state" is essentially the current hidden representation at each layer, which can dynamically change with the sequence length. In a state space model, the "state" is a fixed-size vector representing the system's current status, which is updated based on input and system dynamics. 
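A minimal sketch of such a fixed-size state update is below: a plain linear state space recurrence with toy dimensions. Real SSM-based LLMs such as S4 or Mamba use structured, learned parameterizations, so this is only the skeleton of the idea.

    import numpy as np

    # Linear state space recurrence: h_t = A h_{t-1} + B x_t,  y_t = C h_t
    d_state, d_in = 4, 2
    A = 0.9 * np.eye(d_state)                 # state transition (toy values)
    B = 0.1 * np.random.randn(d_state, d_in)  # input projection
    C = np.random.randn(1, d_state)           # readout

    def ssm_scan(xs):
        """One O(1) update per token (O(L) total); h stays fixed-size throughout."""
        h = np.zeros(d_state)
        ys = []
        for x in xs:
            h = A @ h + B @ x  # the fixed-size state summarizes all history so far
            ys.append(float(C @ h))
        return ys

    ys = ssm_scan(np.random.randn(12, d_in))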

Handling long sequences:

Transformers can struggle with very long sequences due to quadratic computational complexity, while state space models are generally better suited for handling long sequences because of their fixed-size state representation. 

Applications:

Transformers are widely used in natural language processing tasks like machine translation, text summarization, and question answering due to their ability to capture complex relationships between words. State space models are often applied in areas like time series forecasting, control systems, and scenarios where tracking the evolution of a system over time is crucial. 

Recent developments: 

Mamba Model: Researchers have developed architectures like "Mamba", a selective state space model that aims to match the modelling power of transformers while keeping a fixed-size state. Its state update is made input-dependent ("selective"), letting the model focus on relevant information without the quadratic cost of attention, so it handles long sequences more efficiently.
