hi, i'm madiha hanifa.
i grew up in ranchi, did my schooling there, and then left for IIT kharagpur to study mechanical engineering with a micro-specialization in entrepreneurship & innovation and ai, because apparently one degree wasn't enough to contain the curiosity.
i've never really had a straight line. at kharagpur i was doing nlp research while most people were learning how to run finite element simulations. i went from robotics labs at the university of birmingham to management consulting at bcg and then accenture, where i got exposed to the most significant digital transformations of the century and helped enterprises actually adopt ai.
the common thread? i've always been drawn to the intersection of ideas, data, and decision-making — and i've always wanted to understand how things actually work, not just that they do.
right now, i'm on the investments team at Capria Ventures and honestly, it's everything i hoped it would be. i spend my days learning about what's being built, talking to founders, digging into markets, and occasionally demolishing everyone at a pub quiz on a random evening. the energy in this world is addictive.
outside of work, i'm endlessly curious about the world — new sports/games, behavioral economics, exquisite cuisines, political philosophy and talking to new people. i'm always learning, always mid-thought on something.
want to know more?
when i have too much on my mind, i write it down.
this is where i put the things i'm thinking through — observations about ai, life, frameworks i'm building, and whatever else is taking up my headspace. some of it is polished. some of it is half-formed. all of it is honest.
i believe good thinking deserves to be shared, even when it's not finished. especially then.
read something that sparks a thought? find me on linkedin or medium.
i build things when i can't stop thinking about a problem.
most of what i end up making lives at the intersection of language, data, and decision-making — which is also just how my brain works. some of it started as personal frustration with existing tools. some of it is experiment. some is very much in progress.
here's what's been keeping the late nights interesting:
whether you're building something interesting, want to talk ai & venture, or just want to say hi — i'm always happy to make time for a good conversation.

Kodak invented the digital camera and still went bankrupt. The consulting giants are staring at the same fork in the road.
1975 isn't a year you associate with selfies. But that's when Steve Sasson, a Kodak engineer, built the world's first digital camera. It was clunky, shot in black and white, and stored images on a cassette tape. It was also, by any measure, the future.
Kodak's executives buried it. A digital camera would eat into film sales, their $10 billion cash cow. Why cannibalise yourself when the money is still rolling in? Two decades later, Sony, Canon, and Nikon owned the digital photography market. Kodak filed for bankruptcy in 2012. The technology that could have saved the company was killed by the people running it.
That exact playbook is now repeating in enterprise software. Except the stakes are roughly 50 times larger.
SAP's legacy ERP system, called ECC, is the operational backbone of over 35,000 companies globally — 60% of the Fortune 500 among them. It runs their supply chains, finance, procurement, HR. EVERYTHING.
SAP has set a hard end-of-support deadline: December 2027, with extended maintenance stretching to 2030. After that, no security patches, no compliance updates, no fixes. Every one of those companies needs to migrate to S/4HANA, SAP's next-generation platform.
This is not a software upgrade you knock out over a weekend. A typical migration for a large enterprise costs $100–250M, spans 2.5 to 5 years, involves 10,000+ test cases and 30TB+ of data. Only 8–10% of these projects finish on time. Sixty-five percent report major quality issues. And Gartner projects that nearly half of all ECC customers still won't have migrated by 2027.
The consulting rates? Expected to spike 30–50% as the deadline closes in and the talent pool dries up. Demand for S/4HANA specialists could hit three times the available supply by next year.
Companies migrating their ERPs don't actually know if their data is correct. They don't know if their business processes are correctly implemented in the new system. And the way they find out today is almost entirely manual.
System integrators (the Accentures, Deloittes, and EYs of the world) deploy armies of consultants to write test cases by hand, validate business rules through front-end clicks, and run spreadsheet-based data checks. This work alone consumes 30%+ of project timelines and budgets. And even then, coverage is incomplete. Edge cases slip through.
The incumbents know AI can automate huge chunks of this. They have the data. They have the client relationships. They have the domain expertise. And yet — automating assurance work would cannibalise their own billable hours, the same billable hours that fund their quarterly earnings. A partner at a Big Four firm doesn't get promoted for shrinking headcount on a $200M engagement. They get promoted for growing it.
This is the Kodak trap, but in a suit. Protect today's revenue. Delay tomorrow's disruption. Hope someone else blinks first.
The biggest opportunities in enterprise AI right now are not in building another chatbot or another copilot for developers. They are in the unsexy, high-stakes operational plumbing that trillion-dollar companies depend on — migration assurance, data validation, process integrity, retrofit analysis — where manual work still dominates because the incumbents are too incentivised to keep it that way.
The emerging model: AI agents that ingest a company's business documents, flowcharts, code, and data definitions. They map how data flows across workflows, generate validation scenarios that used to take consultants weeks, run those checks continuously against live systems, and flag inconsistencies before they cause damage.
The market math backs it up. Business assurance for SAP S/4 migrations alone is an $18 billion opportunity. Expand to all ERP systems and it's $35 billion. Zoom out to AI-led IT services broadly and you're looking at $500 billion to $1 trillion by 2030.
Accenture alone spends over $1 billion annually on AI and has publicly committed to training its entire workforce on generative AI tools. Deloitte, EY, and TCS are all building internal AI accelerators for SAP migrations. If even one of the Big Four decides to aggressively productise its validation workflows to cannibalise its own services revenue — the startup window narrows fast.
The bulls would argue that incumbents have had a decade to make this shift and haven't. That the incentive structure is too broken, the org charts too bloated, the partner economics too entrenched. But underestimating a trillion-dollar services industry that has survived every technology wave since mainframes would be its own kind of corporate myopia.
The 2027 deadline is not moving. Nearly half of SAP's customer base is still dragging its feet. The migration wave is guaranteed. The only open question is who captures the value — the lumbering giants who see the disruption coming but can't bring themselves to act, or the new entrants fast enough to build trust before the window shuts.
Kodak had 37 years between inventing the digital camera and filing for bankruptcy. The enterprise AI window is measured in months, not decades.
The company that invents the future doesn't always get to live in it. But the one that ships it fast enough just might.
failure is a feedback loop and not an identity
why is sleeping the most difficult thing? do we keep associating sleep with inactivity, especially when the world is moving like a tornado or our models are just not working right, filled with anxiety and hallucinations? anyway, welcome to another late-night patch update ig
life keeps throwing me curveballs, and my first instinct is to feel broken. but what if it's simpler: what if failure is just data, not a death sentence?
we all crave a perfect career blueprint. social media sells us this myth of meteoric success or spectacular downfall. somewhere along the way someone handed us a checklist: grind, isolate, repeat. the checklist got heavier and we started measuring ourselves with it. when things worked out, we kept increasing the weights on these metrics. but the quiet truth is, the model was never tested on real-world data, and so it just seemed to work. maybe we've been modeled all wrong.
society trained us like a piece of code with the wrong weights. and now we are all struggling to make that model work in the real world.
what if every glitch is just a chance to fine-tune? isn't that better than forcing your model to fit where it just doesn't?
so now that we know what to do (hint: fine-tune), why does it still happen, you ask?
— coz we don't have the luxury of pure logic in life all the time. so we gotta unlearn, reduce the importance of these so-called checklists and start believing in our everyday improved models of ourselves.
but is it a bane all the time?
— definitely not. people who take things personally are much more motivated, people who have a chip on their shoulder are much more hungry for the Ws. find what in your broken model actually wins for you and press it, iterate around the edges, not through the middle.
the idea is to find out when it's working for you and when it's not. maybe this is the whole point of being human: you can only optimize so much, but you can care forever. if it works for us, exploit it, if it doesn't, fine-tune!
at least now we know, the only way out of a loop is to loop smarter. so debug without shame :)
btw sleeping isn't the most difficult thing, it's the tiny rebellion of typing in all lower case — constantly backspacing all my accidental shift and ignoring the grammarly suggestions to correct the same, and it's a deliberate misweight :p
what is fine today will be better tomorrow.
AI is no longer just a tool: it's becoming an independent operator. we've moved from AI-assisted decisions to AI-driven execution, and the pace is accelerating. this shift isn't just about better models but about real-world adoption, from LLMs adapting to new patterns to agents that plan and reason their way through tasks.
think about it — just a decade ago, AI was mostly about automation. today, it's about autonomy. the difference? automation speeds up tasks. autonomy replaces human intervention. that's where agentic AI comes in. these AI agents aren't just processing information — they're making real-time business decisions, executing tasks, and optimizing workflows without waiting for human input.
we saw this same pattern with SaaS, cloud computing, and mobile internet. every transformative technology starts as a niche product and then scales until it becomes the default way of operating.
right now, AI in enterprises is fragmented. companies are consolidating these scattered use cases into end-to-end workflow automation, and players like Maven AGI, Wordware, and Decagon are already building AI agents tailored for customer support, enterprise solutions, and industry-specific tasks.
AI is reshaping four major service functions: customer insights and growth (personalization and real-time trend prediction), workforce enablement (AI-human collaboration for efficiency), omnichannel service (seamless transitions across chat, email, and voice), and strategic service (filling the gap between business goals and customer needs).
as businesses realize that point solutions have a ceiling, they're moving toward AI-native architectures — where agentic AI runs the entire workflow. case in point: Klarna has transitioned from Salesforce to developing its own AI-native systems.
having worked with structured enterprise data at large firms and fast-moving, unstructured data in startups, i've seen both sides. the gap between data collection → insight generation → decision execution is closing rapidly.
what's automatable today is expanding fast: AI is moving toward hyper-personalization, deeply customized for specific industries. the future isn't one AI doing everything; it's AI tailored for every function, deeply trained on industry-specific data.
at the core of most AI agents today are LLMs, but they come with limits — memory constraints, reasoning gaps, and a reliance on training data. future advancements in model architecture could complement agentic AI, unlocking new capabilities.
next step? invest in architectures that support these systems, use real-time analytics for sharper automation, and find the right balance between AI autonomy and human oversight to build workflows that don't just work, but evolve.
the next five years won't be about AI assisting businesses — it'll be about AI running them. the question isn't whether companies will adopt agentic AI — it's how quickly they can before their competitors do.
hieeeee, fellow wanderers of the digital realm (wicked laughter)
i rarely plug my blog, but if you've somehow stumbled upon it amidst the chaos of corporate capitalism, WELCOMEE.
today, let's delve into the intricate dance of anxiety and explore my way of taming the crazy with some unconventional (unhealthy) ways that have helped me navigate its murky waters. let's try to make sense of it with raw thoughts and messy solutions (no i didn't call it a problem! hey!)
sometimes we cry but today we didn't (wink, wink).
ever feel like you're stuck in an endless loop of life's mundane tasks, anxiously contemplating, "what my life would be if i did things the other way round?" pause. take a moment to reboot your brain. whether it's making your bed or pondering the grander schemes of life, give yourself the chance to break the cycle. just BREAKFREEE! i tend to go from insomnia to sleeping all day to avoid facing the reality that it starts all over again tomorrow. break the cycle! do one thing new and be proud of it!
i'm a danger zone when both hungry and tired. let's add some dietary spice to anxiety coping — don't stuff your face when stressed and avoid being a food hermit when you're riding high on the energy waves. find that balance on the dietary tightrope; your mind will thank you.
and as you hear, "tHe GrInD sHoUlD nEvEr StOp", "iMpOsTeR", yada yada echoing ceaselessly. STOP. take a moment to see life in its unfiltered glory. gain perspective by reading and understanding that the messy jumble of emotions and thoughts is a shared human experience. it's everyone's messy feelings, yayy!
silence the internal brain babbling monologue. halt the internal murmuring and word jumbling. instead of replaying conversations in your head for the billionth time, put pen to paper (or fingers to keyboard). transform the chaos into words. you'll find that when written down, problems seem less towering and, surprisingly, you feel less at their mercy. huh! it's a magical act of liberation from the burdensome chatter within.
BREAKFREEE from the rigid classifications of permanent or temporary, big or small, win or lose. let's challenge the idea that everything fits into neat little boxes. revel in the messiness, the in-between shades.
to sum it up, this ain't a one-size-fits-all fix. it's a chaotic journey. it's a wild ride. here's to navigating anxiety — one laid-back step at a time :)
cheers to the messiness of being human.
REMINDER: breatheee
enter the sacred chronicles where fortunes are woven, and destinies are forged — the mystic realm of mutual funds. prepare for a voyage through the esoteric lexicon of financial sorcery, as we unravel the enigmas of investment spells. from deciphering the mystical compass of NAV to harnessing the alchemy of asset allocation, join us on an exhilarating expedition into the heart of wealth creation.
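before the incantations fly, the one spell worth learning by heart: NAV (net asset value) is simply the fund's total assets minus its liabilities, divided by units outstanding. a tiny sketch with made-up numbers:

```python
def nav_per_unit(total_assets, total_liabilities, units_outstanding):
    """net asset value per unit: what one unit of the fund is worth today."""
    return (total_assets - total_liabilities) / units_outstanding

# made-up numbers: a fund holding 1.05M in assets and 50k in liabilities,
# with 100k units issued, has a NAV of 10 per unit
price = nav_per_unit(1_050_000, 50_000, 100_000)
```

when the fund's holdings appreciate, NAV rises; when you buy or redeem units, you transact at (roughly) this per-unit value.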
mutual funds are not just gateways to wealth; they are portals to a magical odyssey of growth and empowerment. let the universe of mutual funds be your enchanted canvas, where prosperity and financial fulfillment flourish.
so there it is: my hogwarts-themed crash course on mutual fund jargon before diving into self-managed investing. hope this sheds light for those looking to navigate this realm without a fund manager. here's to mastering the art of mutual fund investments independently!
document or text summarization is one of the most common tasks in Natural Language Processing. with the amount of new content generated by billions of people and the amount of pre-existing data available, we are inundated with an increasing amount of data every day. humans can only consume a finite amount of information. in this project, we discuss and implement two main types of summarization: extractive and abstractive summarization. we also created a simple web application that implements 3 different methods of extractive text summarization.
extractive summarization identifies the important sentences or phrases in the original text and extracts only those. it ranks all the sentences according to their relevance to the overall text and presents you with the most important ones. this method does not create new words or phrases; it only reuses what is already there.
we implemented four extractive summarization methods in the project.
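to make the idea concrete, here's a minimal sketch of one classic extractive approach, frequency-based sentence scoring (illustrative only, not necessarily one of the methods implemented in this project): score each sentence by the average frequency of its words across the whole text, then keep the top-ranked sentences in their original order.

```python
import re
from collections import Counter

def extractive_summary(text, n_sentences=2):
    """rank sentences by average word frequency; keep the top n in order."""
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sent):
        toks = re.findall(r"[a-z']+", sent.lower())
        return sum(freq[t] for t in toks) / max(len(toks), 1)

    ranked = sorted(range(len(sentences)),
                    key=lambda i: score(sentences[i]), reverse=True)
    keep = sorted(ranked[:n_sentences])  # restore original sentence order
    return " ".join(sentences[i] for i in keep)
```

real systems refine this with stopword removal, TF-IDF weighting, or graph-based ranking (TextRank), but the skeleton is the same: score, rank, extract.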
abstractive summarization tries to guess the meaning of the whole text and presents that meaning. it creates words and phrases, puts them together in a meaningful way, and adds the most important facts found in the text. abstractive techniques are more complex than extractive techniques and are also computationally more expensive.
it was not until the development of techniques like seq2seq learning and unsupervised language models (e.g., ELMo and BERT) that abstractive summarization became more feasible.
google's PEGASUS further improved state-of-the-art results for abstractive summarization, particularly in low-resource settings. unlike previous models, PEGASUS achieves close-to-SOTA results with just 1,000 training examples rather than tens of thousands.
PEGASUS uses an encoder-decoder model for sequence-to-sequence learning. the encoder takes into consideration the context of the whole input text and encodes it into a context vector (a numerical representation). this is then fed to the decoder which produces the summary.
GSG (Gap Sentences Generation) pretraining was done on datasets such as XSum, CNN/Daily Mail, Newsroom, and Wikihow. PEGASUS masks whole sentences instead of small spans of text — and the pretrained model can then be fine-tuned on much smaller datasets.
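the gap-sentence objective itself is easy to picture. here's a toy sketch of just the masking step (real PEGASUS selects "important" sentences automatically, e.g., by scoring each against the rest of the document, rather than taking arbitrary indices as this sketch does):

```python
def gap_sentence_mask(sentences, mask_indices, mask_token="[MASK1]"):
    """replace selected sentences with a mask token; the removed
    sentences (joined) become the pretraining target to generate."""
    inputs = [mask_token if i in mask_indices else s
              for i, s in enumerate(sentences)]
    target = " ".join(sentences[i] for i in sorted(mask_indices))
    return " ".join(inputs), target
```

the model is pretrained to regenerate the masked sentences from the gapped document, which is exactly the shape of the summarization task: produce missing, salient sentences from surrounding context.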
natural language processing has many applications, and automatic text summarization is one of the most useful. the focus is shifting from extractive to abstractive summarization, since abstractive techniques generate summaries that are relevant, precise, informative, and less repetitive. because abstractive summarization aims to produce summaries closer to human-written ones, it remains a challenging field.
it is widely known that growing environmental concerns and shifting government policy have pushed companies to think in the direction of sustainability when it comes to managing an effective supply chain strategy. the conventional supply chain practices currently in place account for a lot of material use — producing materials, using them, and discarding them into landfills, rivers, islands, and more.
we presently live in a non-sustainable "Take-Make-Waste" paradigm based on a linear economic model, which causes many environmental problems that will eventually reach a sustainability dead-end as earth's resources become overloaded. this obsolete model will be replaced with a circular economy: an industrial system that is restorative or regenerative by intention and design.
in brief, a circular economy is a novel economic model in which the focus is to keep materials in use for as long as possible and also to preserve — or even upgrade — their value through services and smart solutions.
the concept known as the Circular Business Model involves constructing a supply chain where a considerable fraction of materials are recycled and reused while all the stakeholders of the supply chain receive a net positive out of the process.
one example of a sector where circularity is maintained is the manufacturing of aluminium cans, which are recycled at a high rate. used cans are collected, cleaned, and remade into fresh cans. the energy saved by recycling aluminium and keeping this loop running makes it far more cost-efficient than extracting aluminium from bauxite.
in a circular economy, closed loops consist of two supply chains: a forward and a reverse chain. in a reverse chain, a recovered product re-enters the forward chain. possibilities open up for businesses that provide solutions and services along the reverse cycle.
in the majority of modern supply chains, product parts have proliferated and production has been centralized in order to achieve two critical goals: performance via specialization of parts, and economic efficiency via economies of scale.
as most supply chains have optimized for these goals, adopting circular business models is prohibitively expensive, certainly in the immediate future. to recycle and remanufacture products or components, collection systems would have to stretch over vast distances. because of parts specialization, it is very difficult to amass enough volumes of the parts to make recycling worthwhile.
without a doubt, the described circular business models provide huge opportunities for companies, customers, and the environment. however, these benefits alone will not translate into widespread acceptance. consumer preferences are followed by businesses, and history suggests that most consumers are unwilling to sacrifice performance for environmental sustainability at this time.
success or failure with circularity will continue to depend heavily on the receptivity of top leaders, their commitment to sustainable business values, and the willingness of managers at every organizational level to change and adapt.
an analysis of the effectiveness of various technical indicators.
technical analysis is the practice of using historical price and volume data to forecast future price movements. the debate around it has been running for decades: does it actually work, or are traders just finding patterns in noise?
this article attempts an honest analysis. not the enthusiastic "chart reading is the holy grail" take you'll find on trading forums, and not the dismissive "it's pure gambling" take from pure fundamentalists. the answer, as with most things, lives somewhere in between — and it depends heavily on context.
the core premise: markets are driven by human psychology, and psychological patterns repeat. price charts are a record of collective human decision-making. technical indicators attempt to quantify those patterns.
the article examines several key indicators: moving averages (smoothing out price action to identify trend direction), RSI (Relative Strength Index) (measuring momentum — is a stock overbought or oversold?), MACD (Moving Average Convergence Divergence) (tracking trend changes and momentum), Bollinger Bands (measuring volatility and identifying breakouts), and volume analysis (confirming the strength behind price moves).
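to make the mechanics concrete, here's a minimal sketch of two of the indicators above, a simple moving average and RSI, computed straight from a price list. (this uses plain averages rather than Wilder's smoothing, so values will differ slightly from most charting platforms; the 14-period window is the conventional RSI default.)

```python
def sma(prices, window):
    """simple moving average over a sliding window."""
    return [sum(prices[i - window + 1 : i + 1]) / window
            for i in range(window - 1, len(prices))]

def rsi(prices, period=14):
    """Relative Strength Index: 100 - 100/(1 + avg gain / avg loss)."""
    deltas = [prices[i + 1] - prices[i] for i in range(len(prices) - 1)]
    gains = [max(d, 0) for d in deltas]
    losses = [max(-d, 0) for d in deltas]
    out = []
    for i in range(period - 1, len(deltas)):
        avg_gain = sum(gains[i - period + 1 : i + 1]) / period
        avg_loss = sum(losses[i - period + 1 : i + 1]) / period
        if avg_loss == 0:
            out.append(100.0)  # all gains, no losses: maximally "overbought"
        else:
            rs = avg_gain / avg_loss
            out.append(100 - 100 / (1 + rs))
    return out
```

readings above ~70 are conventionally read as overbought and below ~30 as oversold, though, as discussed below, those thresholds only mean something in context.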
technical analysis tends to work better in highly liquid markets where many participants are watching the same charts — because then it becomes partially self-fulfilling. if enough traders believe a support level matters, it will matter because they'll all act on it simultaneously.
it breaks down in thinly traded markets, during macro shocks (news events that override price patterns entirely), and when used as a sole decision-making framework without any fundamental context.
the uncomfortable truth: the effectiveness of any technical indicator erodes the moment it becomes widely known and used. edge, by definition, is rare.
technical analysis is a lens, not a crystal ball. it can improve the timing of entries and exits for traders who already have a directional view. it can identify when a market is stretched and due for a correction. but it cannot replace an understanding of what you're investing in or why.
intelligent investing uses technical analysis as one tool in a larger toolkit — not as the entire toolkit.
a cooperative game is a competition between groups of players in which external enforcement of cooperative behavior (for example, through binding contracts) is possible. unlike non-cooperative games, cooperative games allow players to form alliances or make pacts, sharing payoffs and coordinating their strategies.
it is a game between coalitions of players rather than between individuals, and it questions how groups form and how they allocate the payoff among players. the framework used to analyze such games is known as cooperative game theory — one of the branches of game theory that includes core and Shapley value.
the central question: given the sets of feasible payoffs for each coalition, what payoff will be awarded to each player?
five people A, B, C, D, and E decided to combine forces to start a business. after careful analysis, they conclude they can achieve a yearly profit of 100. assigning 20 to each person seems reasonable. however, D & E figured out that if just the two of them work together, they can make 45 — more than the 40 they would receive from the equal allocation. A, B and C also realized they can make 25 together. so it's in their interest to keep D and E in the group.
they decide to give 46 to D and E and divide the remaining 54 equally among the three of them. but then C, D and E find they can make 70 together — more than the 64 (46 + 18) the second allocation gives them. this cascading instability illustrates why cooperative games need a more systematic framework.
let N = {1,…,n} be a finite set of players. each non-empty subset of N is called a coalition. the set N is referred to as the grand coalition. a cooperative game in characteristic function form is an ordered pair (N, ν) where ν is a function that assigns to every coalition S its worth/payoff ν(S).
transferable utility (TU) means that utility can be transferred from one player to another without incurring loss. such transfers are possible if the players have a common utility valued equally by all. in a TU game, the payoffs aren't given for individual players but for coalitions — irrespective of how the coalitional payoff is divided, members of the coalition enjoy the same total utility.
two beverage companies X Ltd and Y Ltd are planning to launch a new beverage based on a flavor that must be imported from a foreign company. government restricts the import quantity. rather than competing separately, they form a coalition — X Ltd and Y Ltd enter an agreement where both import the same quantity but in different quarters to ensure the quota restriction is met. the coalition forms and payoffs are allocated in a way that fulfills both their objectives while respecting the external constraint.
the main assumption of cooperative games is that the grand coalition N will form, i.e., cooperation is successful. the task is then to divide the payoff ν(N) among players in a fair way. two key solution concepts address this: the core (the set of allocations from which no coalition has an incentive to break away) and the Shapley value (each player's average marginal contribution across all possible orderings of players).
cooperative game theory provides the mathematical tools to find stable, fair allocations in situations where collaboration is possible and beneficial — from airline alliances to political coalition-building to supply chain partnerships.
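the Shapley value mentioned above can be computed by brute force for small games: average each player's marginal contribution over every ordering of the grand coalition. a minimal sketch, illustrated on a hypothetical 3-player majority game (any coalition of two or more players earns 1):

```python
from itertools import permutations
from math import factorial

def shapley(players, v):
    """brute-force Shapley value: each player's average marginal
    contribution over all orderings of the grand coalition.
    v maps a frozenset of players to that coalition's worth."""
    phi = {p: 0.0 for p in players}
    for order in permutations(players):
        coalition = frozenset()
        for p in order:
            phi[p] += v(coalition | {p}) - v(coalition)
            coalition = coalition | {p}
    n_fact = factorial(len(players))  # number of orderings
    return {p: total / n_fact for p, total in phi.items()}

# hypothetical majority game: any 2+ player coalition is worth 1
majority = lambda S: 1.0 if len(S) >= 2 else 0.0
```

by symmetry each player here receives 1/3, and the shares always sum to ν(N) (the efficiency property). for the five-person business example above you would need ν defined on every coalition, and the O(n!) loop limits this approach to small n.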