Why Adaptability Is Your Best Backup Plan in the Age of AI – Part Two
Written by Smrita Jain, Creative and Experience Director
Smrita Jain is an award-winning Creative and Experience Director, Senior Product Designer, Agentic AI Consultant, and founder of The Aquario Group, a Brooklyn-based design and experience consultancy focused on strategy, UX, product design, brand storytelling, digital transformation, and business growth.
In part one of this article, I explored the importance of adaptability, sharing how I navigated challenges like economic collapse, career reinvention, and adapting to AI in an evolving work landscape. In this second part, I delve deeper into the concept of artificial empathy and its implications, discussing how AI can simulate human emotions but still lacks the true depth of human connection. As we embrace AI’s potential, we must recognize where human judgment, creativity, and empathy are irreplaceable. Continue reading to discover practical ways to incorporate AI without losing your unique human touch.

We have already been adapting
The truth is that we have all been highly adaptable in our lives. We just fail to recognize it. We adapted to the isolation of COVID-19. Some people hated it, and some people felt comfortable with it, but all of us had to adjust in some way. We learned new tools, new routines, new ways of communicating, and new ways of working. We joined meetings from bedrooms and kitchen tables. We collaborated without being in the same room. We learned how to manage digital workflows, remote relationships, and a new kind of work-life rhythm. We managed a lot.
Where humans struggled, though, was in creating and maintaining relationships, and that is something AI cannot truly do.
AI can generate a response that sounds empathetic, but it does not feel empathy. It does not know what it means to lose work, rebuild confidence, move homes, care for someone, miss someone, watch someone die, or start again when life collapses.
Even when AI sounds human, the empathy it produces comes from knowledge, language patterns, and information that humans created. Isn’t that fascinating? So who has the power? If humans have the power to create AI, then we also hold the power to remove it, shut it down, or carry it into the areas where it is truly needed.
When artificial empathy becomes the new normal
One of the things I think about often is the difference between real empathy and artificial empathy. Agentic AI is being designed to personalize experiences, understand patterns, respond to needs, and even sound emotionally aware. In customer service, healthcare, education, sales, and product experiences, AI will continue to create responses that feel personal. It may remember preferences, predict needs, and respond in a tone that feels warm and human.
But at some point, we also must be honest about the reality we are creating. When everything starts to feel personalized, automated, and emotionally responsive, artificiality may become the new normal: artificial empathy, artificial responses, artificial emotions, and artificial relationships with systems that sound human but do not actually feel anything.
That does not mean AI has no value. It has enormous value. But it does mean humans need to understand the difference between a system that can simulate empathy and a person who can feel it.
AI can respond to sadness, urgency, frustration, or confusion. But it does not know what it means to sit in silence after bad news, to help someone through fear, or to make a moral decision in a moment of crisis. That is where human experience still matters.
The human checkpoint: 4 questions I ask before trusting AI
When I work with AI outputs, I try not to ask only, “Is this fast?” I try to ask better questions.
What context is missing? AI may respond quickly, but it may not understand the full emotional, cultural, business, or personal context behind a situation. A response can be accurate and still not be right for the moment.
Does it sound helpful, or only polished? This is important because AI can sound confident even when it is missing the deeper need. Sometimes, a polished response hides the fact that the answer is too generic, too flat, or too removed from the real human situation.
Who is affected if this is wrong? The more sensitive the situation, the more human review matters. Anything connected to safety, health, money, legal decisions, relationships, crisis response, hiring, or emotional distress should not be treated casually.
Where does the human need to step back in? Every AI workflow should have a clear place for human judgment. Someone needs to evaluate, approve, redirect, or stop the system when needed. That is not slowing AI down. That is making AI more responsible.
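Those four questions can be sketched as a simple gate in an AI workflow. This is only an illustrative sketch: the sensitivity categories, the confidence threshold, and the function name are my own assumptions, not a prescribed implementation.

```python
# Illustrative sketch of a human checkpoint in an AI workflow.
# The topic categories and the 0.8 threshold are assumptions
# made for this example, not fixed rules.

SENSITIVE_TOPICS = {"safety", "health", "money", "legal", "hiring", "crisis"}

def needs_human_review(topics, model_confidence):
    """Route an AI output to a person when the stakes are high
    or the model itself is unsure."""
    touches_sensitive = bool(SENSITIVE_TOPICS & set(topics))
    return touches_sensitive or model_confidence < 0.8

# A billing dispute touches money, so a human steps back in:
print(needs_human_review(["money", "billing"], 0.95))  # True
# A routine FAQ answer with high confidence can ship as-is:
print(needs_human_review(["faq"], 0.95))  # False
```

The point of the sketch is the shape, not the numbers: every workflow has some explicit place where evaluation, approval, or escalation happens, rather than letting outputs flow through unreviewed.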
Thinking like a human, writing for the machine
As I work more deeply with agentic AI workflows, I have started to see that AI is not just about prompts or automation. It is about designing how a system thinks, how it reasons, how it responds, and how it knows when to stop.
McKinsey’s State of AI 2025 reported that 23% of organizations are already scaling agentic AI systems somewhere in their enterprise, while another 39% have begun experimenting with AI agents. McKinsey describes AI agents as systems based on foundation models that can act in the real world, plan, and execute multiple steps in a workflow. For me, this is where product design, UX thinking, and AI begin to merge.
To design for agentic AI, you still need to think like a human but speak like a machine, because you are talking to a machine. Building machine-readable language and UX flows requires the foresight and proactiveness of a human mind. You need to understand hesitation, urgency, motivation, failure, trust, and decision-making. But you also need to write so the machine can learn: you need to define inputs, outputs, reasoning steps, escalation points, and what a good decision should look like.
You are not just designing screens anymore. You are designing behavior. This is the mindset I used when creating a few lightweight AI agent concepts.
I designed a Growth Funnel Analyzer Agent to help teams spot where growth is leaking and what to fix first. It reads a simple spreadsheet export of funnel steps by channel and week, then identifies the biggest drop-offs, compares conversion performance and Customer Acquisition Cost, and summarizes trends over time. The agent outputs a prioritized action plan and experiment roadmap, turning raw data into clear decisions for acquisition, activation, and revenue growth.
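The core of that funnel analysis can be sketched in a few lines. The step names and counts below are invented for illustration; a real agent would read them from a spreadsheet export, as described above.

```python
# Hypothetical sketch of the core of a funnel-analyzer agent:
# given ordered funnel step counts, find the biggest drop-off.
# Step names and numbers are made up for this example.

funnel = [
    ("visit", 10_000),
    ("signup", 2_400),
    ("activate", 1_100),
    ("purchase", 180),
]

def biggest_dropoff(steps):
    """Return the step transition with the lowest conversion rate."""
    worst = None
    for (name_a, n_a), (name_b, n_b) in zip(steps, steps[1:]):
        rate = n_b / n_a if n_a else 0.0
        if worst is None or rate < worst[2]:
            worst = (name_a, name_b, rate)
    return worst

step_from, step_to, rate = biggest_dropoff(funnel)
print(f"Biggest leak: {step_from} -> {step_to} ({rate:.0%} convert)")
```

Everything after this computation, such as comparing Customer Acquisition Cost by channel and drafting an experiment roadmap, layers on top of the same idea: turn raw counts into a ranked list of where to look first.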
I also designed a Support Ticket to Product Roadmap Agent to help turn customer support tickets into clear product priorities. It reads a spreadsheet export from tools such as Zendesk or Intercom, clusters tickets into themes, and highlights the highest-impact issues based on volume, severity, and frequency of recurrence. The agent surfaces root causes, recommends UX and workflow fixes, and outputs a prioritized roadmap with quick wins and longer-term bets, helping teams reduce support load, improve satisfaction, and focus product work on what customers need most.
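At its simplest, the clustering step is theme detection plus counting. The keyword rules and sample tickets below are illustrative assumptions; a production agent would use the exported ticket data and a real classifier.

```python
# Hypothetical sketch of grouping support tickets into themes
# and surfacing the highest-volume issues. The keyword map and
# sample tickets are invented for illustration.
from collections import Counter

THEME_KEYWORDS = {
    "billing": ["invoice", "charge", "refund"],
    "login": ["password", "sign in", "2fa"],
    "performance": ["slow", "timeout", "lag"],
}

def theme_of(ticket_text):
    """Assign a ticket to the first theme whose keywords match."""
    text = ticket_text.lower()
    for theme, words in THEME_KEYWORDS.items():
        if any(w in text for w in words):
            return theme
    return "other"

tickets = [
    "I was charged twice, need a refund",
    "Cannot sign in after password reset",
    "App is slow and keeps freezing",
    "Refund still not processed",
]

counts = Counter(theme_of(t) for t in tickets)
print(counts.most_common())  # billing issues rise to the top
```

Weighting by severity and recurrence, and mapping themes to root causes and roadmap items, would build on this same counted-theme structure.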
The third concept is a Next Best Action CRM Agent, designed to help sales and growth teams focus on the right leads at the right time. It reads a simple CRM export in a spreadsheet, including lead stage, last touch date, source, engagement signals, and deal value, then assigns a priority score and recommends the next best step for each lead, whether that is follow-up, nurture, booking a call, or disqualification. The agent also generates tailored outreach snippets, turning a static pipeline into an actionable daily plan that improves speed-to-lead, consistency, and conversion.
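A minimal version of that scoring and routing logic might look like this. The field names, weights, and thresholds are all assumptions chosen for the sketch, not the agent's actual rules.

```python
# Hypothetical sketch of a next-best-action scorer for CRM leads.
# Field names, weights, and thresholds are illustrative assumptions.
from datetime import date

def score_lead(lead, today):
    """Higher score = more urgent. Combines deal value, engagement,
    and staleness of the last touch."""
    days_stale = (today - lead["last_touch"]).days
    score = lead["deal_value"] / 1000 + lead["engagement"] * 10
    if days_stale > 14:
        score += 5  # stale leads need attention before they go cold
    return score

def next_action(lead, today):
    """Recommend one concrete next step for this lead."""
    days_stale = (today - lead["last_touch"]).days
    if lead["engagement"] >= 3:
        return "book a call"
    if days_stale > 30:
        return "disqualify or re-nurture"
    return "follow up"

lead = {"deal_value": 12_000, "engagement": 4,
        "last_touch": date(2025, 1, 2)}
today = date(2025, 1, 10)
print(score_lead(lead, today), next_action(lead, today))
```

Run over a whole CRM export, this turns a static pipeline into a ranked daily plan: sort by score, act on the recommended step, and let a human override anything that does not fit the context.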
For me, these agents are not about replacing human thinking or humans themselves. They are about helping teams move from scattered information to better decisions. The human still defines the problem. The human still understands the context. The human still decides what matters. The agent helps turn scattered inputs into something a human can actually judge.
My agentic AI rule: Design for decisions, not just automation
When I think about AI agents, I do not only think about tasks. I think about decisions. A strong AI agent should help answer questions like:
What needs attention first?
What pattern are we missing?
What issue keeps repeating?
What should be escalated to a human?
What decision can be made now, and what still needs more context?
That is where agentic AI becomes more useful. It is not just completing a task. It is helping people see what matters, prioritize better, and act with more clarity.
That is also why I believe designers have an important role in the AI era. We understand flows, behaviors, friction, trust, and decision points. Those skills matter when we are designing not only what a user sees, but how an intelligent system behaves behind the scenes.
The manual button still matters
Even if AI takes over more parts of our world, even if we face fake videos, fake human interactions in customer service, automated conversations, and tools that seem to do everything for us, we will still need human judgment.
Because on the days when Cloudflare goes down, creation tools stop working, automations fail, systems glitch, or a city loses power during a storm, humans will still look for the “manual” button, and someone will need to know what to do.
Technology can optimize perfect conditions. Humans adapt in imperfect conditions.
A self-driving car may follow a route, but can it stop in the middle of a flood, notice a dying cat or dog, and make the emotional decision to save them? Maybe one day technology will become more advanced, but the instinct to protect life, help a stranger, or make a judgment call in chaos still belongs deeply to human beings.
We see videos all the time of people running into danger, helping strangers, saving animals, carrying neighbors, protecting children, or simply doing the right thing in the middle of chaos. That is not automation. That is humanity.
This is also why responsible AI cannot be only about speed and efficiency. The National Institute of Standards and Technology’s AI Risk Management Framework focuses on better managing AI risks to individuals, organizations, and society. To me, that is another reminder that humans still need to guide, question, evaluate, and intervene. The future is not only about what AI can do. It is also about what humans should allow it to do.
Why handmade work may become more valuable
This is one of the reasons I am rebuilding my design agency in this heavy digital AI world, while also building an e-commerce brand rooted in art, handmade intention, and original work. For me, these two things are connected.
My agency represents strategy, UX, digital products, and AI-enabled thinking. My e-commerce work represents the other side of the same belief, that human creativity, handwork, and original art will become more valuable in a world where so much can be generated, copied, automated, and repeated.
In the world of AI, I believe the things that were once made by hand will always hold strong value, but only if we as humans understand what AI cannot truly achieve.
AI can generate a painting-inspired image. It can imitate a style. It can create something that looks beautiful on the surface. But it cannot recreate the exact brushstrokes of an acrylic painting made by hand. It cannot recreate the same watercolor technique I personally used while making a piece. It cannot fully replicate the pressure of the hand, the accident of the paint, the emotional state of the artist, or the lived moment behind the work.
That is where humans need to start placing value again. I use my paintings and artworks as the foundation for designs on everyday products, giving more meaning and emotional value to objects that may otherwise feel ordinary. The intent is to take something mundane and give it a more personal, artistic life. These designs are not easily available everywhere; they are sold through creatively curated resources.
To me, this is not about rejecting technology. It is about deciding what deserves value in a world where almost anything can be generated.
That is also why Elite Styles Fashion matters to me. It brings focus back to Indian craftsmanship, handiwork, and the beauty of Indian fusion and Western wear. A machine-made garment may be fast, consistent, and affordable, but it will not always carry the same fit, detail, design sensitivity, or quality of handwork that comes from a carefully crafted piece.
Even though AI is becoming more powerful, and even though we need to adapt to this changing environment, the power still ultimately lies with us as humans. We decide what we want to value.
Do we value a cheap, automated, prebuilt, system-generated AI response, or do we value something that took time, skill, and care, something precious, one of a kind, and made with a human touch?
A carefully handcrafted pottery piece, once broken, can never be replaced in the same way again. The same is true for hand-embroidered Indian fusion dress designs: once damaged, they cannot be reproduced in the same way. That is what makes them valuable. Their value is not only in the object, but in the hand, the process, the imperfection, and the story behind it.
I believe the handmade will not disappear because of AI. I believe it may become more meaningful because of AI. And it is really up to us to make that decision. As technology becomes more artificial, human intention may become more valuable.
What you can start doing today
Choose one area of your work where AI can reduce friction. Use it to organize scattered information, compare options, summarize patterns, or help you see where decisions are getting stuck. Then choose one area where you will intentionally keep human judgment in control. That balance matters.
The future will not belong only to people who know how to use AI. It will belong to people who know how to use AI with judgment, care, and responsibility.
Adaptability is the real backup plan
AI may not take every job, but it will change the value of many jobs. It will change how we work, what we learn, how we create, and how we define expertise. But it will also reveal the value of human judgment, taste, empathy, creativity, courage, and care.
The real backup plan is not to fear AI or blindly worship it. The real backup plan is to stay awake, stay curious, and stay adaptable enough to learn what is changing while protecting what makes us human.
Adaptability is not panic. It is preparation. It is the decision to learn before we are forced to. It is the ability to question what technology gives us, use what helps us, and keep building a future where human imagination still matters.
That is the mantra I keep coming back to in every season of my life and work. Accept it. Learn it. Use it. And most importantly, adapt.
If you are exploring how AI, UX, product thinking, and human-centered strategy can come together in your business, I invite you to connect with me and continue the conversation.
Smrita Jain, Creative and Experience Director
With more than 18 years of multidisciplinary creative experience, including 7+ years in UX/UI and product design and 1+ year in agentic AI-orchestrated workflows, Smrita has built her career at the intersection of design, technology, storytelling, and measurable business impact.