The Leadership Shortage Nobody Is Planning For
How AI is reshaping entry-level jobs, leadership pipelines, and the future of work
AI is quietly reshaping the conditions under which future leaders get built. The changes are small enough per year to be invisible and structural enough across a decade to produce a leadership shortage nobody is planning for. The earliest signals are already visible. The full pattern will take years to mature.
This is also a paper for current leaders. The decisions being made today, in rooms those leaders already sit in, will determine whether the pipeline holds or thins.
This is the full picture in one place: what AI is costing in dimensions that don’t show up on anyone’s books yet, why the market won’t correct for it on its own, who it hits first and hardest, and what leaders can actually do about it.
✦ ✦ ✦
The argument unfolds in five parts.
Part I. AI is quietly thinning entry-level hiring across the white-collar economy. A one or two percent thinning per year, compounded across a decade, produces a senior talent shortage 10 years from now that nobody will have seen coming.
Part II. Not every AI automation is the same. Some automated tasks were never teaching anyone anything. Others were how junior professionals built the judgment that senior work requires. The efficiency gains show up on the books. The learning value erosion doesn’t.
Part III. The market won’t self-correct because the cost of a thinner senior pipeline in 10 years doesn’t sit on any one company’s books. Every individual decision makes rational sense. The aggregate outcome is still a senior talent shortage no single company is positioned to prevent. It’s a textbook externality.
Part IV. Inside the aggregate is a sharper pattern hitting women first. Two structural problems are compounding at the same time. The broken rung was there before AI. AI is now automating the roles held disproportionately by women, and women are getting less manager support to develop AI skills.
Part V. There’s no clean way to measure learning value erosion in real time. This is a mindset, not a metric. What follows is a set of questions to ask, actions to take, and patterns to pay attention to, so that when AI implementation decisions get made, this conversation comes with them.
✦ ✦ ✦
Part I: The Quiet Making of a Leadership Shortage
Something is happening in the white-collar economy that isn’t making the news yet. Not a dramatic layoff wave. Not a splashy announcement. Something quieter, and because it’s quiet, it’s easy to miss.
Entry-level hiring is thinning.
Not by much. One or two percent a year. The kind of number that gets lost in an earnings call footnote. The kind of change that individual managers explain away with “we’re being more selective this year” or “AI tools are making the team more efficient.” Both of those things are true. They just don’t tell the whole story.
Here’s what the data suggests. A Stanford paper by Brynjolfsson and colleagues, using payroll data from millions of workers, found that workers aged 22–25 in occupations most exposed to generative AI saw about a 13% relative decline in employment since the widespread adoption of large language models. The same research found minimal changes for experienced workers in the same occupations, and essentially no change for workers whose jobs were less exposed to AI. The pattern is specific. Young people in AI-exposed roles are being hired less, while everyone else is roughly holding steady.
McKinsey’s most recent State of AI survey, published November 2025, found that 88% of organizations now regularly use AI in at least one business function. Three-quarters of respondents report that AI is now delivering EBIT gains in at least one area. Adoption has moved from experimental to embedded. And the tasks AI is best at (drafting, summarizing, data entry, research synthesis) are exactly the tasks that historically made up the first year or two of a white-collar career.

Here’s the feedback loop that makes this quietly dangerous.
Every junior who doesn’t get hired is a person who doesn’t get trained. Every person who doesn’t get trained is a person who doesn’t build the judgment that comes from doing the work. Every person who doesn’t build the judgment is a person who can’t teach the next cohort what good looks like. If AI is doing the work, a human is not accumulating the judgment.
The math on this isn’t loud in any single quarter. It’s loud in a decade.
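To make the compounding concrete, here is a back-of-the-envelope sketch in Python. The one-to-two percent rates are the illustrative figures from above, not measured values.

```python
# Back-of-the-envelope: what a small annual thinning of entry-level
# hiring compounds into over a decade. Rates are illustrative.
for annual_thinning in (0.01, 0.02):
    cohort = 1.0  # entry-level cohort size, indexed to today
    for _ in range(10):
        cohort *= 1 - annual_thinning
    print(f"{annual_thinning:.0%}/year -> cohort {1 - cohort:.0%} smaller after 10 years")
```

At one percent a year, the cohort feeding the senior pipeline is roughly 10% smaller after a decade; at two percent, roughly 18% smaller. No single year looks alarming. The decade does.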
The benefits show up this quarter. The costs show up in 10 years.
That’s the part worth noticing. Not the AI adoption itself. Not even the entry-level hiring dip on its own. The quiet making of a senior leadership shortage that won’t be obvious until the people who could have filled those roles aren’t there.
The research is out there. The pattern is visible in the data. What’s missing is the conversation about what to do about it before the shortage being built becomes the shortage that can’t be closed. AI handles many things well. Senior judgment is not one of them.
Part II: The Learning Value Trade-Off
The quiet thinning of entry-level hiring produces two effects at different timescales: a small, invisible efficiency gain this quarter, and a large, structural leadership shortage 10 years from now.
What exactly is being lost, and why does it matter more than the efficiency conversation is giving it credit for?
Two kinds of entry-level work
Think about the tasks that used to fill the first two years of a white-collar career. Drafting a first-pass memo. Pulling together a research summary. Cleaning up a dataset for someone more senior to analyze. Sitting in on a call, taking notes, producing a readout. Scheduling, formatting, copyediting, running the numbers.
Historically, those tasks did two things at once. They produced output for the company, and they produced learning for the person. A junior analyst who drafted 30 memos didn’t just generate 30 deliverables. She learned what a good memo looks like. She developed taste. She started to notice which framings worked and which didn’t. Her next memo was better than her first. By memo 100, she had a kind of judgment that didn’t exist in her when she was hired.
Now consider what AI is best at automating. Drafting. Summarizing. Cleaning datasets. Producing readouts from transcripts. The efficiency case is obvious. Of course you’d automate these tasks. They take a junior person half a day each, and AI can do them in minutes at a fraction of the cost.
But look at what’s on both lists. The tasks AI is automating are the same tasks that used to be the junior analyst’s training ground.
Efficiency gained, learning value lost
Here’s the thing most of the AI adoption conversation misses. Those tasks weren’t just output generation. They were also a curriculum. The inefficiency was the entire point.
Learning value looks like the judgment that forms through repetition. The instinct that develops when someone has to argue with themselves before turning in the work. The pattern recognition that comes from having made the mistake once. The quiet confidence of someone who has been corrected enough times to know what corrected feels like. Taken together, these capacities form what we usually call critical thinking: the capacity to evaluate, to judge, to distinguish what matters from what doesn’t. The efficiency gain is measurable. The learning value erosion is not. Companies optimize for what they can see and measure.
When a senior person asks a junior analyst to draft a memo, the output matters. But the deeper work is what happens in the junior analyst’s head while drafting. The struggle to structure an argument, the decisions about what to include and what to cut, the judgment calls about tone and framing. All of that builds the thing called “analytical judgment” or “business sense” or “just knowing what’s important.” Nobody gets it from reading about it. They only get it from doing it.
So when AI drafts the memo, the company gets the output for free. The junior analyst gets the approved document. But nobody gets the learning. The memo was produced, the deliverable was delivered, and the cognitive scaffolding that was supposed to be built just wasn’t.
Multiply that across a whole cohort for a decade, and what results is a generation of people who have the titles of senior analysts but not the scaffolding that used to come with those titles.
Some tasks, not all tasks
Not all entry-level work is like this. Some of it really was pure busywork. Filing expense reports, formatting slide decks into a template, copying data from one system to another. If AI automates those tasks, nothing is lost because those tasks weren’t teaching anyone anything anyway.
The distinction matters. The question isn’t “should we automate entry-level work” in general. The question is “which entry-level tasks were also training the person doing them, and what are we going to do about those specifically.”
Here’s a rough taxonomy.
[Chart: a taxonomy of entry-level tasks, grouped by output value and learning value]
Data entry and coordination were mostly output without much learning. Drafting, research and synthesis, and analysis were output AND learning. When AI automates across the first group, you get efficiency with minimal capability loss. When AI automates across the second group, you get efficiency and capability loss together, and the capability loss compounds.
[Chart: efficiency gained vs. learning value eroded, by task category]
The bar chart makes the trade-off visible. For drafting, research and synthesis, and analysis, the learning value being eroded is greater than the efficiency being gained. That’s the gap to name. The trade-off happening silently, inside the efficiency conversation, without most decision-makers seeing it as a trade-off at all.
The grunt work was the entire point.
Why this matters more than it seems
The reason learning value erosion isn’t visible is twofold. First, it takes a decade to manifest. Second, most companies don’t have a clear definition of what senior judgment actually is, which means they can’t measure its absence even when it’s happening. By the time a company notices that its senior analysts can’t produce the quality of analysis the previous generation did, the AI adoption decision that caused it is 10 years in the rearview mirror and nobody connects the two.
Worse, the decision-makers who get the benefit of the efficiency gain are usually not the same people who absorb the capability cost. The efficiency gain shows up in this quarter’s P&L. The capability cost shows up in the career of a junior person who was never hired, or hired and never trained, and who now doesn’t have the scaffolding to become the senior person the company will need.
This is where the leaders who can tell the difference matter. The ones who notice which tasks were curriculum and which were just output. The ones who protect the curriculum even when it’s less efficient in the short term. Those are the leaders worth betting on.
Part III: The Limits of Market Correction
The most common pushback on this argument is some version of: won’t this fix itself? AI will get better. Companies will adjust. The market will sort it out. The AI evangelists aren’t wrong, exactly. They’re supposed to be positive. It’s literally their job. But positive isn’t the same as accounting for the full impact, and the evangelists haven’t looked at the same research.
The gap to close
Here’s the visual.
[Chart: three lines over time — learning value (teal), leadership capacity (purple), AI adoption (silver)]
Three lines. Learning value is the teal line, dropping. Each wave of leaders coming up through an AI-assisted career builds less of the specific thinking senior work requires. Not because they’re less smart, or incapable. They just never had to build the judgment.
Leadership capacity is the purple line, rising. The hard problems don’t go away just because AI shows up, and AI itself creates more judgment calls because someone still has to decide when to trust its output and when not to.
AI adoption is the silver line. It rises for a while, because AI really is getting better at work that used to need human experience. Then it levels off. The improvement AI would need to fully cover leadership capacity depends on having enough expert humans around to teach it what only humans know. And that depends on the teal line. The one that’s dropping.
The gap is where leadership capacity outpaces what learning value and AI adoption can build together. Small at first. Invisible for a while. Then, in about eight years, big enough to show up in how companies actually perform. The industries that need leadership capacity most will hit it first. The industries that have been fastest to adopt AI will hit it hardest, because they’ll have spent more years hollowing out the pipeline without anyone noticing. Here’s that gap by itself.
[Chart: the leadership capacity gap by itself, widening over time]
That’s the gap to close. The usual answers don’t work.
This is how it happens
Behind the gap is a feedback loop. Here’s how it plays out through one person’s experience. A young professional, two years into her career, working at her first real job.
Year 1. AI handles a lot of the grunt work on her desk. She still writes memos, but not from scratch the way her predecessor did. She’s still good at her job. She’s just developing less of the skill than an analyst who built it by hand would have, because she didn’t do the reps of building something from nothing. The difference is so small, nobody notices. She gets promoted on schedule.
Year 3. AI has gotten better. She leans on it for more of the research and synthesis work now. She’s still excellent by the standards of her title. The analytical judgment that separates senior analysts from mid-career ones isn’t being built the way it used to. Another small gap, still not a crisis. She’s promoted to Director.
Year 7. AI has moved up the ladder with her. It’s now doing the synthesis work she used to do as a mid-career analyst. Her job now is to mentor entry-level people and provide the judgment AI can’t. She does both well. But her judgment is thinner than that of the directors who came up before her, because they built theirs through work that was still done by hand. Nobody can see the difference in any single moment. It only shows up when you compare a whole generation of directors to the one before.
Year 12. Her generation of directors is now the pool companies pull VPs from. Same number of directors. Fewer of them ready for what comes next. Companies start noticing they can’t find the senior talent they used to find. It’s too late to do much about it, because you can’t manufacture a year-12 professional in year 13.
That’s what the chart is showing. Not a layoff. Not a dramatic event. Learning value dropping while demand for leadership capacity climbs. A slow buildup of small differences, compounded across a whole generation, until the aggregate starts breaking the systems that depend on senior people being ready.
No one is making a bad decision
When a company automates an entry-level task, the company doing it comes out ahead. The savings land on its own books this quarter. And next quarter. And the one after that. The cost of a thinner senior pipeline in 10 years isn’t on any report. Companies manage what they measure, which means they don’t manage this at all. That cost gets spread across every future employer, including companies that had nothing to do with the original decision.
This is the kind of problem economists have studied for a long time. The cost of the decision lands on someone other than the person making the decision. That person, rationally, doesn’t factor that cost in. They’re not being careless. They’re just counting the costs they can see. Economists call this an externality.
In July 2025, Enrique Ide at IESE Business School published a paper that put formal math behind this exact problem. His argument, stripped down: companies automate early-career tasks more than is good for their industry, because no single company bears the full cost of doing it. He calls it “socially excessive automation,” which is a mouthful, but the idea is simpler than the phrase. Rational decisions by individual companies produce an irrational outcome for the industry as a whole.
Rational decisions by individual companies produce an irrational outcome for the industry as a whole.
Here’s what that gap costs if it doesn’t close. Ide’s rough estimate is that AI-driven entry-level automation could shave 0.05 to 0.35 percentage points off US per-capita growth per year. That sounds small. Run it out 20 years, compounded, and you’re looking at GDP somewhere between 1% and 7% below where it would otherwise be. That’s trillions of dollars, spread across a population that has no idea this is happening.
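As a sanity check on that compounding, a few lines of Python reproduce the range. The drag values are Ide’s; the 20-year horizon and the simplification (applying the drag directly to the level of GDP) are mine.

```python
# Rough check: a growth drag of 0.05-0.35 percentage points per year,
# compounded over 20 years. Approximates GDP relative to baseline as
# (1 - drag)^20, which is accurate for small drags.
for drag_pp in (0.05, 0.35):
    drag = drag_pp / 100
    shortfall = 1 - (1 - drag) ** 20
    print(f"{drag_pp} pp/year over 20 years -> GDP ~{shortfall:.1%} below baseline")
```

The low end lands at about 1% below baseline, the high end at just under 7%. The same range as above.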
The market isn’t catching this because the market is working exactly how it’s supposed to. Every company makes the decision that makes sense for itself. The market isn’t broken. No one is accountable for the collective outcome.
The leaders in the rooms where these decisions are getting made, and who act on what they see before the shortage matures, are the ones whose companies will still have the senior leadership they need.
Part IV: The Compound Effect on Women
The averages hide something. Two problems are hitting women at the same time. Both are structural. They are compounding in a way that neither one alone would predict.
The broken rung
The pipeline for women was already broken before AI arrived. For 11 years, McKinsey and LeanIn.Org have published their annual Women in the Workplace study. Researchers call the first promotion gap “the broken rung.” The December 2025 edition drew on data from 124 companies employing roughly three million people. The finding that anchors this whole part is in the first paragraph of their summary: in 2025, only 93 women were promoted to manager for every 100 men. For women of color, the number was 74. For Black women specifically, 54. Eleven years of this study, and the broken rung has not closed once.
The gap shows up because entry-level women get less of the type of support that turns into promotions. Only 31% of entry-level women have a sponsor, compared to 45% of men. They are less likely to have a senior colleague put them up for promotion, less likely to be given opportunities to manage people, and less likely to be connected to the informal networks where sponsorship decisions actually get made. None of these gaps close themselves over time. They compound. By the time the cohort reaches senior levels, women hold 29% of C-suite seats. That number hasn’t moved since 2024.
AI’s task profile
Now add AI to a pipeline that already has a broken rung. In March 2026, the International Labour Organization published a research brief on generative AI and the jobs most likely to be automated. Overall, female-dominated occupations are almost twice as likely to be automated as male-dominated ones: 29% of jobs in female-dominated fields fall into the at-risk category, compared to 16% for male-dominated ones. The gap is even sharper when you look at the jobs AI is most likely to replace: 16% of female-dominated roles versus 3% of male-dominated ones. More than five to one.
The reason is occupational segregation, not ability. Women are concentrated in clerical, administrative, and business-support roles, the exact tasks AI is best at automating. Secretaries, receptionists, payroll clerks, accounting assistants, data entry. Men are more represented in construction, manufacturing, and manual trades, where physical work is harder for AI to substitute. The Brookings Institution, analyzing which workers would struggle hardest to find new jobs if AI displaced them, found that 86% of the six million most vulnerable US workers are women.
Those roles were often the stable entry points for women, even highly educated ones. Women earn 59% of US bachelor’s degrees and 63% of master’s degrees. They hold only 48% of entry-level corporate roles. The degrees are there. The jobs the degrees should lead to are not.
The collision
Here is what those two facts produce together. A woman starting her career today faces the pre-existing broken rung. She is less likely to be sponsored, less likely to be put up for promotion, less likely to be given manager opportunities. The same McKinsey research found that young professional women also get less manager support to use AI at work, which matters because in an economy where early AI adopters are rewarded and promoted, an AI skills gap at the starting line compounds across a career. On top of all that, the roles she is statistically most likely to be in are the first ones AI is automating. The cascade hits her harder and earlier because her pipeline was already thinner.
Structural problems hit hardest where the structure was already weakest.
The cascade is the overall picture. The collision shows how unevenly it lands. Both are true at the same time: the whole pipeline is thinning, and the women’s pipeline is thinning faster.
The pattern is stark enough to name plainly. The leadership shortage will hit women first, hit women of color worst, and produce a senior talent pool less diverse than the one we have now. Not because of any decision any company made on purpose. Because two structural problems are running at the same time, on a timeline AI accelerated.
Naming this isn’t the same as accepting it. The leaders who can see the pattern are the ones who can interrupt it.
Part V: Our Work Starts Here
This isn’t a framework for measuring learning loss. There’s no dashboard, no scorecard, no KPI. Honestly, learning value erosion is hard to measure in real time. By the time the numbers show up, the pipeline is already thin.
What follows is something more practical. A set of questions to ask and actions to take so that in the rooms where AI implementation decisions get made, this conversation comes with them.
This isn’t a framework for measurement. It’s a mindset for the rooms where decisions get made.
Pay Attention
The work is paying attention to the conditions over time. These are the patterns that tell you whether the pipeline is holding up or thinning.
Time-to-promotion by cohort, not just by individual. Individual promotion timelines tell you about individual performance. Cohort promotion timelines tell you about the structural conditions. If the junior class of three years ago took four years to reach a particular milestone and this year’s class is taking five, something about the conditions has changed, even if no individual is underperforming.
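As an illustration of what tracking that could look like, here is a minimal sketch in Python with pandas. The file name and column names are hypothetical; any HR extract with a hire date and a milestone date per person would do.

```python
import pandas as pd

# Hypothetical HR extract: one row per employee, with hire date and the
# date they first reached a given milestone (blank if not yet reached).
df = pd.read_csv("promotions.csv", parse_dates=["hire_date", "milestone_date"])

df["cohort"] = df["hire_date"].dt.year
df["years_to_milestone"] = (df["milestone_date"] - df["hire_date"]).dt.days / 365.25

# Median time-to-milestone per hiring cohort. A steady upward drift
# across cohorts signals changing conditions, even when no individual
# is underperforming. Recent cohorts are right-censored (many have not
# reached the milestone yet), so compare cohorts over like windows.
print(df.groupby("cohort")["years_to_milestone"].median())
```

The point isn’t the code. It’s that the question gets asked at the cohort level, where the structural signal lives.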
The scope of what the newest senior people can do, compared to earlier cohorts. This one is harder to name but worth paying attention to. Current senior analysts came up through a mostly-human pipeline. The next cohort is coming up through a heavily AI-assisted pipeline. Over the next five years, the difference will become visible. The newer seniors will be capable in different ways than the older ones. Not worse, necessarily. Different. If their range is narrower (great at AI-mediated work, thinner on the judgment layer), that’s a pattern worth naming.
Who’s getting manager support for AI skill-building. Young professional women are getting less manager support to use AI at work. That’s not a women problem. It’s a manager problem, which means it’s a process problem, which means it can be fixed. Pay attention to whether AI skill-building in your organization is being distributed the way you’d want it distributed. If it’s landing disproportionately with people who already have sponsorship, the AI adoption is amplifying the sponsorship gap.
What roles you’re not backfilling. When a junior role goes vacant and the team says “we’ll manage with AI,” that’s not a backfill decision. It’s a pipeline decision, dressed up as a staffing decision. Over five years, enough of these decisions add up to a different shape of team, with different training conditions for whoever is still in it. Pay attention to the cumulative pattern, not just the individual call.
Ask Questions
These are the questions to bring into every corporate AI conversation. They are not exhaustive. They are the ones that surface the learning value conversation without derailing the efficiency conversation that’s already happening.
If we automate this, what will the next generation of senior people not know? This is the cascade question in plain English. Senior judgment is built through reps that happen at the junior level. Removing the reps doesn’t just shift the work. It changes what senior people are capable of in a decade. Bringing this into the meeting doesn’t block the automation. It gets the cost onto the table.
What was this task teaching the person who used to do it? Every task being automated was producing two things: output for the company and learning for the person. If you can’t name what the task was teaching, you probably don’t need to protect it. If you can (drafting builds analytical judgment, research builds synthesis, even formatting a deck builds a sense of narrative structure), you’re looking at a task where the efficiency gain has a hidden cost.
Are we automating the same roles we’ve always underhired into? This is the asymmetry question. AI automation doesn’t hit every occupation equally. If the roles being automated are the same ones that have historically been the stable entry points for women, people without advanced degrees, and people from under-sponsored backgrounds, the automation is doing demographic work that the company is not naming as a decision. But it is a decision.
Who benefits from the efficiency gain, and who absorbs the learning loss? These are usually different groups. The efficiency gain lands on this quarter’s P&L, which benefits shareholders and the leaders who signed off. The learning loss lands on the junior person whose career will be shaped by not having done the reps. This is just accurate accounting.
What’s our plan for producing senior judgment in 10 years? If the pipeline is being thinned today, the plan for what replaces it tomorrow can’t be “we’ll figure it out.” Either the company has a real plan (apprenticeships, structured skill-building, protected reps, deliberate mentorship) or it doesn’t. The absence of a plan is itself an answer.
Take Action
Documentation is where most leaders get this wrong, because they hear “institutional knowledge” and think company history. That’s the wrong target. No one is going to read a nostalgic record of how your company used to run. What matters for the pipeline is something more specific.
Not tribal knowledge. The shape of the work.
What to actually document:
The process, not the output. If an analyst used to produce a first-draft memo from scratch, the memo itself isn’t the thing worth documenting. The steps she took to produce it (how she decided what to include, which sources she pulled, which framings she tried and rejected, how she judged when the draft was good enough) are the thing. When AI produces the memo instead, the process disappears. If it’s documented somewhere, it can still be taught.
The judgment calls, named as judgment calls. Senior people make dozens of small judgment calls a day that they don’t recognize as judgment calls because they’ve been doing them so long the decisions feel automatic. Ask a senior person to narrate, for an hour, what they just decided and why. Write it down. That’s the stuff that never gets captured in process documentation and that AI cannot teach the next person.
The pattern recognition that isn’t in the workflow. Why does this account always need a second check from the partner? Why does this engineering estimate always come in 30% over initial? Why does this team not get told about budget changes until after they’ve been made? The answers to these questions live in senior people’s heads. They’re not in the process docs. When the senior person leaves and AI is handling the visible workflow, the pattern recognition goes with them unless someone wrote it down.
The cost of getting it wrong, not just the steps for getting it right. Most documentation captures the happy path. What makes someone senior is knowing what happens when things go sideways, which risks are recoverable and which aren’t, and which shortcuts create problems that don’t surface for 18 months. Document the failures, the near-misses, and the load-bearing cautions, because those are what AI can’t learn from a clean dataset.
One practical note: this kind of documentation doesn’t have to be polished. A rough Google Doc that a senior person spent 90 minutes on is more valuable than a perfectly formatted process document that omits the judgment layer entirely. Don’t let perfect be the enemy of having anything at all.
The Rest Is on Us
The pattern described in these pages is not hypothetical. The data is available. The mechanism is understood. The trajectory is projectable. What has been missing is a clear account of how the pieces connect and what they imply together. That account is now on the record.
The leadership shortage 10 years from now is being built today, in decisions that look reasonable in isolation and become costly in aggregate. Leaders who understand this can act on it. Leaders who do not will inherit its consequences.
The point of this work was never to slow AI adoption. AI is here. It’s going to keep getting better. The point was to name what the adoption is costing in a dimension that isn’t on anyone’s balance sheet yet, so the costs can be weighed against the benefits with eyes open.
Nothing in these pages is a framework. The arguments are arguments. The questions are scaffolding. The actions are starting points. Use them, adapt them, argue with them. The goal is not to have the answers. The goal is to make sure these questions are being asked at all.
The mindset this work is asking for is what I’ve been calling Moonlit Leadership in my other writing. Reflective, aware, willing to pay attention to what everyone else is rushing past. You don’t need to read any of that to use what’s here. But if you want to understand where these questions come from, that’s where.
The future of technology gets planned. The future of the people who run it doesn’t plan itself. The rest is on us.
Sources
Brynjolfsson, Chandar, and Chen. Canaries in the Coal Mine? Six Facts About the Recent Employment Effects of Artificial Intelligence. Stanford Digital Economy Lab, 2025.
Ellingrud, Yee, and Martínez. The Broken Rung: When the Career Ladder Breaks for Women — and How They Can Succeed in Spite of It. Harvard Business Review Press, March 2025.
Ide, Enrique. Automation, AI, and the Intergenerational Transmission of Knowledge. IESE Business School, July 2025.
International Labour Organization. Gen AI, occupational segregation and gender equality in the world of work. March 2026.
Manning, Aguirre, Muro, and Methkupally. Measuring US Workers’ Capacity to Adapt to AI-Driven Job Displacement. Brookings Institution and Centre for the Governance of AI, January 2026.
McKinsey & Company. The State of AI in 2025: Agents, Innovation, and Transformation. November 2025.
McKinsey & Company and LeanIn.Org. Women in the Workplace 2025 (11th annual edition). December 2025.
✦ ✦ ✦
A Note on Framing
A few choices behind this paper are worth naming for the reader.
On the aggregate lens. This paper looks at the leadership shortage at the structural, decade-long level. That choice was deliberate. The same shifts can feel sharp and immediate to the individuals living inside them, and that experience deserves its own attention. It just isn’t the lens this paper takes.
On the assumption of senior human judgment. The argument here rests on a premise: senior human judgment will remain essential to the work organizations do. Some readers will challenge this. The fairest version of the counter-argument is that AI will close the gap on senior work faster than current data suggests. The research currently available doesn’t support that. The paper proceeds on this premise, but the premise is worth naming.
On AI dependency. There is a related concern this paper doesn’t take up: reliance on AI for everyday decision-making is becoming ubiquitous, including among senior people, which compounds the pipeline problem from a different direction. That concern deserves its own paper.
✦ ✦ ✦
About the Author
Angela Segovia has spent nearly two decades in leadership roles spanning Chief of Staff, commercial strategy, product strategy, and program and portfolio leadership in global gaming, technology, and educational technology. Before that, she was a librarian. She is the creator of the Moonlit Leadership framework and writes regularly at moonlitleadership.com and on LinkedIn.
✦ ✦ ✦