Yes, we’re being pulled away — from essential topics like accessibility, which are often overshadowed by technological distractions. A 2024 study by WebAIM analyzed the top 1 million homepages worldwide and found that 96.3% had detectable accessibility issues, with an average of 56 errors per page — highlighting how far digital accessibility still is from being a priority.
A friend working on an MVP for a GPT-integrated search engine told me that, initially, the project leads asked to scale back testing due to the high cost of tokens. Soon after, they suggested standardizing responses to further reduce token consumption. Unbelievable! They ended up recreating the same old chatbot that traps users in an endless loop of generic responses, offering no real solution.
This was the motivation behind writing this article.
It’s better to be prepared to think critically and strategically than to feed yet another empty case study on LinkedIn. I want to introduce practical and accessible exercises, especially for those who have never written a single line of code. This is not about deep technical expertise but about giving designers a starting point to better navigate AI-driven projects. The goal is to help them interpret technical discussions, challenge trends, and identify real solutions — instead of slapping the word GPT onto a product as if it were a silver bullet.
This doesn’t mean AI isn’t important — it just highlights a misalignment of priorities. The real commitment should be to the people using our products and solving actual problems, not blindly chasing the latest tech trend.
Many companies don’t actually want to solve problems with AI — they just want enough of an impact for presentations, reports, and LinkedIn case studies.
It’s always the same story:
- Decision-makers return dazzled from industry events, impressed by flashy AI demos.
- But they have no concrete plan for implementation.
- Specialists are pulled from their strategic projects just to validate exaggerated expectations.
- Grand roadmaps are created that keep corporate narratives spinning without ever moving forward.
How much have these AI enthusiasts actually invested in machine learning over the years?
Without a real foundation, there is no magic solution.
“If AI only made it onto your roadmap because your boss got excited at an event and wants something like the competition, chances are your product doesn’t actually need AI.”
Some companies do it right. Netflix and Spotify have years of experience using machine learning to optimize recommendations and personalize user experiences. In healthcare, solutions like NeoMed’s Kardia accelerate heart attack diagnosis, while others apply AI to clinical data analysis, assisted diagnosis, and treatment discovery.
These examples prove that AI adds real value when built on strong data foundations and clear objectives. Without those, AI is just another buzzword disguised as innovation.
AI adoption directly affects user experience. One of the most common pitfalls is the chatbot trap — companies rush to implement AI to cut costs, but the end result frustrates users instead.
Users get stuck in an endless cycle of “I didn’t understand” responses, leading them to abandon the chatbot entirely. Instead of helping, these AI implementations create more barriers than solutions.
Another example: Recommendation algorithms.
- Some AI models overdo personalization, making it harder for users to discover new content.
- The lack of UX-driven AI planning leads to experiences that feel monotonous and predictable.
AI that fails UX is AI that loses user trust.
Friends working on GPT-based projects have shared some recurring issues:
1. Inconsistent tone of voice
Problem: GPT doesn’t match the brand’s identity, flipping between formal and casual tones.
Impact: Requires constant prompt engineering to maintain consistency.
2. Biases in data and responses
Problem: GPT is trained on vast amounts of internet text, which can lead to biased or misleading outputs.
Impact: Reinforces stereotypes, excludes groups, and spreads inaccurate information.
3. High token costs due to redundancy
Problem: GPT has no persistent memory between calls, so the full conversation context must be resent with every request, forcing excessively long prompts.
Impact: AI-driven products become expensive to operate at scale, as the sketch below illustrates.
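To make that redundancy concrete, here is a minimal sketch you can run in a Colab cell. The characters-per-token ratio, message length, and price are hypothetical placeholders, not any vendor’s real rates; the point is the growth pattern when every request must carry the full conversation history.

# Rough illustration (hypothetical numbers) of why stateless chat APIs
# get expensive: the entire history is resent on every turn, so total
# tokens grow quadratically with conversation length.
CHARS_PER_TOKEN = 4         # rough rule of thumb for English text (assumption)
MESSAGE_CHARS = 200         # assumed average message length
PRICE_PER_1K_TOKENS = 0.01  # illustrative price, not a real rate

history_chars = 0
total_tokens = 0.0
for turn in range(1, 11):
    history_chars += MESSAGE_CHARS                 # one new message per turn
    tokens_sent = history_chars / CHARS_PER_TOKEN  # full history resent
    total_tokens += tokens_sent
    print(f"Turn {turn:2d}: ~{tokens_sent:4.0f} tokens sent")

cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS
print(f"Total: ~{total_tokens:.0f} tokens (~${cost:.4f} at the assumed rate)")

Doubling the number of turns roughly quadruples the token bill, which is exactly the kind of cost curve teams discover only after launch.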
Beyond that, major consultancies and system integrators often push GPT-based solutions for convenience, without considering better alternatives. Many decisions are made not based on technical feasibility but on how easy it is to sell a big-name AI brand.
The cost of keeping a system running on expensive tokens could be better invested in infrastructure and custom open-source models — but those options don’t generate the same revenue for vendors selling GPT integrations.
Tools like ChatGPT, DeepSeek, and others have made AI mainstream — and that’s exciting. But LLMs are not always the best choice for MVPs and scalable products.
They’re powerful, but not universal solutions.
Deploying LLMs without critical thinking leads to high costs, slow responses, and products that don’t actually solve real problems.
“The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.” — Isaac Asimov
AI didn’t appear overnight. It’s the result of decades of research, refined algorithms, and increased computing power.
In Uncle Tungsten, Dr. Oliver Sacks describes his childhood fascination with the chemical elements, intertwining it with the evolution of the tungsten filament lamp. That book helped me understand that scientific discovery precedes commercial application.
We must question the logic of prioritizing technology before defining real problems.
Concepts like the Metaverse, NFTs, Gamification, and Augmented Reality had real potential but were reduced to superficial trends by misdirected hype. AI may be heading down the same path, with companies adding “AI” without a clear understanding of the problem they are trying to solve.
1. Metaverse
A digital universe with potential for education, remote work, and entertainment.
Where it went wrong:
- Meta’s hype focused on awkward avatars and virtual land speculation.
- Low accessibility (expensive VR headsets, poor interfaces).
2. NFTs
Digital ownership authentication, useful for art and gaming.
Where it went wrong:
- Became speculation and pump-and-dump schemes.
- Overflow of meaningless collections.
3. Chatbots and Virtual Assistants
Automated customer service and intelligent assistants.
Where it went wrong:
- Limited bots that just say “I didn’t understand.”
- Companies trying to replace humans instead of improving experiences.
4. Augmented Reality (AR)
Digital interactions overlaid on the real world, useful for education and gaming.
Where it went wrong:
- Hype around revolutionary AR glasses that never arrived.
- Useless apps (like QR codes that just open PDFs).
5. Gamification
Using game mechanics to boost engagement in education and productivity.
Where it went wrong:
- Companies only implementing point systems without real engagement.
- Apps forcing meaningless competition.
Hype, however, isn’t entirely negative. Without it, we might not have progressed as quickly in applying AI to medicine and automation. The key is to channel this energy not into a hollow race for innovation but as a catalyst for real and valuable discoveries.
Everything comes down to the quality of the question, not the certainty of the answer. The real issue is not whether we should adopt AI, but why we are doing it. Enthusiasm can lead to adoption without critical reflection, resulting in superficial products with no real purpose.
In practice, what truly makes a difference is understanding the fundamentals — distinguishing Machine Learning, LLMs, and Deep Learning, rather than simply trying to “add AI” to everything. It’s equally important to explore alternative approaches, such as:
- Traditional neural networks vs. deep neural networks — Not every AI system needs deep learning; sometimes, simpler architectures are more efficient.
- Generative models vs. predictive models — While models like GPT generate text, others excel at forecasting trends and identifying patterns.
- Supervised, unsupervised, and reinforcement learning — Understanding how a model learns directly impacts its application.
- Rule-based systems vs. AI-driven learning — In many cases, well-defined rules outperform a machine learning model.
- The role of embeddings and vectorization — Techniques like word embeddings and semantic embeddings enhance search accuracy and recommendation quality (see the sketch after this list).
- Specialized vs. general-purpose models — Some tasks are better handled by small, highly trained models rather than a large, costly LLM.
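As a concrete illustration of the vectorization point above, here is a minimal sketch using TF-IDF vectors and cosine similarity via scikit-learn (preinstalled in Colab). TF-IDF is plain vectorization rather than a learned semantic embedding, but the retrieval logic is the same idea: turn text into vectors, then rank by similarity.

# Minimal vector-search sketch: TF-IDF vectorization + cosine similarity.
# A production system would swap TF-IDF for learned semantic embeddings,
# but the core idea is identical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

documents = [
    "How to reset your password",
    "Track the status of your order",
    "Update your delivery address",
    "Talk to a customer service representative",
]

vectorizer = TfidfVectorizer()
doc_vectors = vectorizer.fit_transform(documents)  # one vector per document

query = "update the address for my delivery"
query_vector = vectorizer.transform([query])

# Score the query against every document and pick the best match
scores = cosine_similarity(query_vector, doc_vectors)[0]
best = scores.argmax()
print(f"Best match: '{documents[best]}' (score {scores[best]:.2f})")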
In short, before rushing to integrate AI into a product, it’s crucial to recognize that each model serves a distinct purpose. Choosing the wrong approach can lead to wasted time, excessive costs, and disappointing outcomes.
This is not a tutorial, and this article is aimed at those who have never written a single line of code. I chose Google Colab because it provides an accessible, ready-to-use environment without the need for complex setup. Since it runs in the cloud, anyone can execute the examples regardless of their available hardware. Additionally, its integration with popular libraries makes experimentation and prototyping fast and collaborative.
If you’ve never used Google Colab before, follow these steps:
- Go to Google Colab and sign in with your Google account.
- Click on “New notebook” to create a new file.
- Copy and paste the example code into a cell in the notebook.
- Click the ▶️ (Play) button next to the cell to run the code.
- A text input field will appear below the executed code, allowing you to interact with the example.
Now you can run AI, Machine Learning, and Deep Learning examples directly in your browser without installing anything. But hold on… running a Colab notebook doesn’t make you a programmer! Appreciate the convenience of modern technology, but don’t walk away claiming you “understand programming” just because you executed a prewritten script. Be honest — running code is not the same as understanding how it works.
Below, we have three examples. Exploring generic AI, Machine Learning, and Deep Learning in this order provides a structured and progressive understanding:
- Rule-based models (Generic AI) serve as an initial step, demonstrating automation logic without learning capabilities. They help clarify the limitations of systems that strictly follow predefined instructions.
- Machine Learning (ML) introduces the ability to recognize patterns and learn from data, allowing models to make predictions without being explicitly programmed with rules.
- Deep Learning (DL) takes this concept even further by using deep neural networks to recognize complex patterns without relying on fixed rules.
To illustrate this, I’ve chosen a classic example: MNIST, a dataset of handwritten digits (0–9). In this case, a deep learning model learns to recognize numbers by analyzing thousands of examples — without anyone having to program specific rules for each handwriting variation. This process showcases how neural networks can extract subtle patterns from data and make highly accurate predictions.
Website navigation assistant
Problem:
Many websites have scattered information and complex menus, making it difficult for users to find what they are looking for.
Solution:
A simple rule-based chatbot can act as a navigation assistant, helping users quickly locate information.
It can answer questions such as:
- “Where can I find the products section?”
- “How do I access my order history?”
- “How can I talk to a representative?”
This model doesn’t require advanced AI, making it an efficient solution for improving navigation without redesigning an entire website.
Benefits:
- Enhances content discovery
- Reduces frustration and abandonment rates
How to test?
- Click the ▶️ button in the code cell to run the chatbot.
- Type ‘exit’ to close the chatbot.
- Try asking:
- “Where can I find the products section?”
- “Can I change my delivery address?”
Observe the chatbot’s responses.
# Simple Rule-Based Chatbot (Website Navigation Assistant)
import difflib

def chatbot(question):
    # Predefined question/answer pairs (keys kept lowercase to match input)
    responses = {
        "where can i find the products section": "You can find the products using the search bar or by accessing the main menu at the top.",
        "how do i access my order history": "Access your order history by clicking on your account and then on 'My Orders'.",
        "how can i talk to a representative": "To speak with a representative, click here: [support link]",
        "what are your operating hours": "Our customer service is available Monday to Friday, from 9 AM to 6 PM.",
        "can i change my delivery address": "Yes, you can change your address in 'Account Settings' in the main menu."
    }
    question = question.lower().strip("? ")
    # Find the closest match to the user's question
    # (cutoff=0.6 means a match must be at least 60% similar to be accepted)
    closest_match = difflib.get_close_matches(question, responses.keys(), n=1, cutoff=0.6)
    if closest_match:
        return responses[closest_match[0]]
    else:
        return "Sorry, I didn't understand. You can try rephrasing your question or visit our FAQ here: [FAQ link]"

# Testing in Colab
while True:
    question = input("Ask a question (or type 'exit' to close): ")
    if question.lower() == "exit":
        print("Chatbot: Goodbye!")
        break
    response = chatbot(question)
    print(f"Chatbot: {response}")
Solution type: Rule-based (Automated FAQ).
Simple, direct, and efficient for specific and well-defined scenarios.
Is it AI? Yes.
Because it simulates an automated conversation using predefined responses.
Is it ML? No.
Because it cannot learn from new data or interactions.
Is it Deep Learning? No.
Because it does not use deep neural networks or advanced learning algorithms.
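Before moving on to the next examples, here is an optional contrast sketch (with tiny, hypothetical training data) showing the smallest step from rules to learning: instead of hand-writing every answer key, a classifier is trained on labeled example questions and can generalize to phrasings it has never seen.

# Contrast sketch: the same FAQ task handled by a model that learns
# from labeled examples instead of following hand-written rules.
# The training data below is tiny and hypothetical; real systems need far more.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

training_questions = [
    "where are the products", "show me the product catalog",
    "where is my order history", "list my past orders",
    "i want to talk to a human", "contact a support agent",
]
labels = ["products", "products", "orders", "orders", "support", "support"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_questions, labels)  # the learning step rules can't do

# Unlike the rule-based bot, it can generalize to unseen phrasings
print(model.predict(["can you list my past orders"]))  # expected: ['orders']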