Do It the Open Source Way - The Role and Potential of InnerSource in the AI Era
The Question That Haunts Modern Organizations #
While countless developers debate the merits of prompt engineering and context engineering, while influencers demonstrate their latest AI coding tricks, and while startups pivot toward AI-first development, a glaring gap persists in the discourse. We’re drowning in discussions about individual productivity and small-team tactics, yet we’re starving for guidance on how large, established organizations should navigate the AI transformation.
This isn’t just a big enterprise problem. Even small startups with powerful 10-person AI teams will eventually handle massive codebases and scale into large systems overnight. The fundamental question becomes: how do organizations prepare their source code and collaboration practices to work seamlessly with AI at speed without breaking down?
This isn’t another article about how to write better prompts or optimize your copilot experience. This is about the organizational DNA that will determine whether your company thrives or merely survives in the AI era.
TL;DR: Five Critical Organizational Challenges #
AI-powered development faces five critical organizational challenges that the Open Source Way can address:
The Standardization Dilemma: Organizations want AI to understand their proprietary methods, but AI excels at open standards rather than proprietary ones. The key is recognizing that AI has learned extensively from open, standardized practices.
Quality Assurance Bottleneck: AI generates massive amounts of duplicate code, and humans can’t review it all. Instead of letting AI reinvent the wheel repeatedly, organizations need to prevent duplication by sharing quality-assured code internally and avoiding endless review cycles.
Information Silo Problem: As AI becomes more autonomous, organizations want it to access broader organizational knowledge, but siloed information creates multi-layered access problems. Transparent, non-siloed organizations enable AI to access the information it needs without bureaucratic bottlenecks.
Document Format Chaos: AI struggles with PowerPoint, Excel, and proprietary formats. Open source collaboration naturally gravitates toward Markdown-based documentation and issue-based collaboration—formats that AI can easily parse and understand.
Missing Context Crisis: People give AI snapshot information without the crucial context of “why” decisions were made. Open source culture naturally documents decision-making processes, creating the contextual understanding AI needs to make appropriate suggestions.
Think of AI as a context-free genius engineer who suddenly joined your organization—like an open source contributor who arrived without any background knowledge of your systems, processes, or history. We need to provide organizational mentorship to AI, but this can’t be an individual effort—it requires systematic, organization-wide support that helps AI understand not just what we do, but how and why we do it.
Implementing this Open Source Way within organizations is what we call InnerSource. It encourages transparent collaboration, shared standards, and community-driven improvement, helping teams naturally gravitate toward practices that AI understands while preserving the institutional knowledge that makes your organization unique. Rather than forcing change, it builds the organizational resources and individual capabilities needed to gradually align with “AI-known standard methods,” creating conditions where change feels natural and beneficial.
1. “Our Way” vs “Standard Way” #
Picture this: Your organization has spent years perfecting its code review process, documentation standards, and testing methodologies. They’re not just practices—they’re part of your organizational identity. Then AI arrives, and suddenly it doesn’t understand your carefully crafted conventions. It generates code that follows PEP8-ish style, not your custom Python style guide. It writes tests in Jest patterns, not your proprietary testing framework.
Of course, you could teach AI your specific ways, but it’s obviously easier to leverage the zero-shot knowledge it already possesses. That’s why most people end up gravitating toward Bootstrap, Tailwind, and other well-established patterns—because it’s simply more efficient.
The Uncomfortable Truth #
AI doesn’t know your proprietary information. It wasn’t trained on your internal coding standards, your custom frameworks, or your unique architectural decisions. It speaks the language of open source—the common tongue of developers worldwide that has been extensively documented and shared.
This creates an immediate friction point. Organizations have invested heavily in their “special way” of doing things, often for good reasons. Maybe your coding standards emerged from painful debugging sessions. Perhaps your documentation format evolved to meet specific compliance requirements. These aren’t arbitrary choices—they’re institutional wisdom crystallized into process.
The Short-Term Solution: Embrace Standards #
The pragmatic answer, at least for now, is standardization. Adopt PEP8 for Python. Use conventional commit messages. Follow established testing patterns. Structure your documentation in formats that AI can parse and understand.
This isn’t capitulation—it’s pragmatism. When AI generates code that already aligns with your standards, the friction disappears: code reviews become smoother, integration becomes seamless, and your developers spend less time fighting with AI-generated code and more time leveraging its capabilities. And as context windows expand dramatically, you’ll eventually be able to feed ever more of your source code and proprietary information into the context anyway.
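As a concrete illustration (the function and commit scope below are hypothetical), PEP8-style code paired with a Conventional Commits message gives AI two extensively documented conventions to anchor on:

```python
# PEP8-style naming and structure: conventions AI has seen at scale,
# so its suggestions blend in without translation. Names are illustrative.

def parse_order_id(raw_value: str) -> int:
    """Parse an order ID, raising ValueError on malformed input."""
    cleaned = raw_value.strip()
    if not cleaned.isdigit():
        raise ValueError(f"invalid order ID: {raw_value!r}")
    return int(cleaned)

# A matching Conventional Commits message for this change might read:
#   feat(orders): add strict parsing for order IDs
```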
The Long-Term Reality: AI Will Learn Your Way #
But here’s the nuance that most discussions miss: this is likely a temporary problem. AI systems are rapidly improving at understanding context and proprietary information. Fine-tuning, improved in-context learning, and longer context windows will eventually allow AI to absorb your organizational quirks.
The question becomes: Is it worth the organizational upheaval to solve a problem that may resolve itself?
InnerSource as the Bridge #
This is where InnerSource becomes invaluable. InnerSource doesn’t demand that you abandon your organizational identity overnight. Instead, it provides a framework for gradual transition—like helping Little Red Riding Hood find a path through the woods that is both safe and efficient.
InnerSource isn’t about writing code for yourself—it’s about writing for your team, for the broader organization, for neighboring teams, and for teams one or two hops away. It means writing code that everyone can read easily, whether they’re new junior engineers or experienced, seasoned professionals. This philosophy extends beyond just code to in-code documentation and architectural decisions.
InnerSource encourages the adoption of open source practices within your organization: transparent collaboration, shared standards, and community-driven improvement. It helps teams naturally gravitate toward practices that AI understands while preserving the institutional knowledge that makes your organization unique.
The methodology develops strategies for gradually aligning organizations with “AI-known standard methods” while building the organizational resources and individual capabilities needed for this transition. It’s not about forcing change—it’s about creating conditions where change feels natural and beneficial.
2. The Quality Assurance Bottleneck: When AI Outpaces Human Review #
This isn’t really a secret—everyone is struggling with this inconvenient truth. AI capabilities keep expanding exponentially, but human cognitive abilities remain relatively static. While AI can certainly assist with code comprehension and make reviews more efficient, there are fundamental limits to human processing capacity that we can’t engineer away.
AI can generate a thousand lines of code in seconds. A skilled developer might review a few hundred lines in an hour. The math doesn’t work, and it’s getting worse as AI capabilities improve.
The Reviewing Problem Is Hard to Scale #
Writing tests can certainly improve this situation significantly, and the consensus from many organizations is that tests have become more critical than ever—they serve as essential guardrails in an AI-assisted development world. Even if AI generates test code alongside implementation code, someone still needs to review those tests. Even if AI explains its reasoning, someone needs to verify that reasoning. The fundamental constraint remains: human cognitive bandwidth.
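As a minimal sketch of such a guardrail (reusing the hypothetical `parse_order_id` example from earlier), the human-reviewed test stays small and stable, and every AI-generated rewrite of the implementation must keep it passing:

```python
# Guardrail tests: reviewed once by a human, then enforced on every
# AI-generated change. Module and function names are illustrative.
import pytest

from orders import parse_order_id  # hypothetical internal module


def test_valid_id_parses_despite_whitespace():
    assert parse_order_id(" 42 ") == 42


def test_malformed_id_is_rejected():
    with pytest.raises(ValueError):
        parse_order_id("42a")
```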
Traditional quality assurance assumes scarcity—that code is expensive to write and therefore worth careful review. But when code becomes cheap to generate, our quality models break down completely.
The Solution: Quality-Assured Code Sharing #
The key insight is preventing AI from reinventing the wheel repeatedly. Instead of letting every AI solve the same problems and generate similar code, create repositories of reviewed, tested, and approved code components that teams can reuse.
When you have many shareable components, as in open source and InnerSource environments, something interesting happens: various people end up using those tools and components. Quality gets assured through collective usage—many eyes examine the code, find issues, and improve it over time.
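For instance (a sketch with hypothetical names), a retry decorator that was reviewed and tested once can live in a shared InnerSource repository, so no team and no AI session has to regenerate its own subtly different variant:

```python
# A shared, quality-assured utility from a hypothetical InnerSource
# repository: reviewed once, hardened by many users, reused everywhere.
import time
from functools import wraps


def retry(attempts: int = 3, backoff_seconds: float = 1.0):
    """Retry a function on any exception, with linearly growing backoff."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return func(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise
                    time.sleep(backoff_seconds * attempt)
        return wrapper
    return decorator
```

Teams then import the vetted decorator instead of prompting AI to write fresh retry logic, and every bug fix to it benefits everyone at once.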
This approach requires a fundamental shift in mindset. Code becomes less about individual ownership and more about collective resource management. That said, shared code works best under weak code ownership rather than fully collective ownership—when everyone owns something, no one truly owns it, so shared components still need designated stewards who accept contributions from anyone. This in turn implies a culture of properly maintaining shared source code.
But here’s the good news: AI can now shoulder much of the routine work of source code maintenance. The real question is how organizations will own and steward such shared code repositories.
Teams need to think beyond their immediate needs and consider how their solutions might benefit others across the organization.
InnerSource Enables Systematic Sharing #
InnerSource provides the cultural foundation for this transformation. It encourages developers to think like open source maintainers—not just writing code for their immediate needs, but creating solutions that others can understand, modify, and improve.
This isn’t just about code libraries. It’s about creating frameworks for identifying which code deserves quality assurance investment, processes for maintaining shared repositories, and cultural practices that encourage contribution and reuse.
The methodology addresses the balance between automation and human oversight, helping organizations develop sustainable practices for AI-generated code integration while maintaining quality standards.
3. The Information Silo Problem: AI’s Knowledge Hunger #
Organizations dream of AI that knows everything—an artificial employee with access to all departmental knowledge, capable of exceptional cross-functional work. But this dream crashes against the reality of information silos.
The Multi-Layered Access Challenge #
Consider your organization as a Venn diagram. Department X has access to certain information, Department Y to different information, Department Z to yet another set. The intersection—information accessible to all departments—is often surprisingly small.
When you try to create “organizational AI,” you hit this limitation immediately. Current RAG (retrieval-augmented generation) implementations optimize information access per department, but they struggle with search accuracy and cross-departmental context. Each department gets its own AI assistant, but none of them can truly understand the organization as a whole.
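A toy sketch (all data and department names are invented) makes the Venn-diagram constraint concrete: retrieval can only ground AI answers in documents the requesting departments are cleared to see, so the intersection is exactly what an organizational AI has to work with:

```python
# Toy model of access-scoped retrieval: the overlap between department
# permissions bounds what an organizational AI can be grounded in.
documents = [
    {"text": "Q3 roadmap draft", "departments": {"x", "y"}},
    {"text": "payroll bands", "departments": {"z"}},
    {"text": "API style guide", "departments": {"x", "y", "z"}},
]


def retrievable(user_departments: set[str]) -> list[str]:
    """Return the texts the given departments are allowed to access."""
    return [
        doc["text"]
        for doc in documents
        if doc["departments"] & user_departments  # non-empty intersection
    ]


print(retrievable({"x"}))  # roadmap draft and style guide
print(retrievable({"z"}))  # payroll bands and style guide only
```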
You might think this isn’t a big deal because the projects you want AI to reference might fit within one circle of a Venn diagram. But this isn’t just about source code access—it’s a multi-layered, multi-stage problem that goes much deeper.
Your organization might use Notion for some projects, Office 365 for others. Some teams use GitHub, others use GitLab. There are differences between people who have licenses and those who don’t. When these different systems need to collaborate, problems multiply. Even when employees work on the same project, their access levels to information might differ dramatically based on their role, seniority, or department.
In the short term, AI will likely remain personal—individuals will handle their own AI interactions. In such cases, lack of access to organizational information, or the lead time required to get permissions to access organizational information, becomes a critical bottleneck that limits AI effectiveness.
The Power of Information Overlap #
The solution isn’t giving AI access to more information—it’s increasing the overlap in the Venn diagram. The larger the intersection of shared information between departments, the more powerful your organizational AI becomes.
This requires cultural transformation. Organizational members might keep much information in their personal Google Drives or local storage. Without proper rules and cultural shifts, employees, engineers, and product owners will naturally default to keeping information in their personal possession rather than making it organizationally accessible.
Employees need to shift from hoarding information to sharing it. Departments need to move from protecting their knowledge to contributing to organizational intelligence.
Security and Access Considerations #
This doesn’t mean removing all access controls or creating security vulnerabilities. It means thoughtfully expanding access to information that can safely be shared while maintaining appropriate boundaries for sensitive data.
The challenge is cultural as much as technical. AI can only handle formalized information—it cannot access tacit knowledge or information that individuals hoard. Therefore, enabling open, transparent collaboration becomes extremely important.
However, showing your thoughts, resources, unfinished work, and documents you’re not confident about to many people creates significant barriers, including psychological ones. That’s why training that makes such practices feel natural and safe is essential.
Information sharing requires trust, and trust requires time to build. Organizations need frameworks for gradually expanding information access while maintaining security and privacy requirements.
InnerSource Breaks Down Barriers #
InnerSource excels at breaking down information silos because it’s fundamentally about creating open, collaborative environments within organizations. It provides proven practices for knowledge sharing, contribution management, and community building.
The methodology helps organizations develop trust and security models for broader information access while creating cultural transformation programs that encourage open information sharing. It addresses the reality that information access changes can’t be implemented overnight and requires sustained cultural adoption.
4. Document Format Chaos: The Markdown Revolution #
Your organization has decades of institutional knowledge locked in PowerPoint presentations, Excel spreadsheets, complex Word documents, JIRA tickets, Confluence pages, and Notion databases. You want to feed all of this to AI, but here’s the problem: format diversity creates accuracy nightmares.
The AI Accessibility Challenge #
To AI, a PowerPoint file is just XML and image files. It lacks semantic understanding of your carefully crafted slides. Excel spreadsheets become data soup without context. Complex documents lose their structure and meaning when processed by current AI systems.
Image processing accuracy still has significant room for improvement, and platform walls create additional barriers. Your knowledge is scattered across multiple systems with different APIs, search capabilities, and access controls.
The Radical Solution: Markdown and GitHub Centralization #
The answer sounds almost absurdly simple: write everything in Markdown and centralize everything in GitHub (or similar version-controlled platforms).
This recommendation might trigger immediate resistance. What about rich formatting? What about complex visualizations? What about our existing workflows?
But consider the benefits: fewer locations for AI to access, semantic structure that AI can understand, built-in version control and collaboration features, linkable and searchable content, and maintainable documentation over time.
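One gradual first step (a sketch; the directory layout is an example, and pandoc is assumed as the conversion tool) is batch-converting legacy Word documents to GitHub-Flavored Markdown:

```python
# Batch-convert legacy Word documents to GitHub-Flavored Markdown with
# pandoc, preserving the originals. Directory names are illustrative.
import subprocess
from pathlib import Path

for docx in Path("legacy-docs").rglob("*.docx"):
    target = docx.with_suffix(".md")
    subprocess.run(
        ["pandoc", str(docx), "-t", "gfm", "-o", str(target)],
        check=True,
    )
    print(f"converted {docx} -> {target}")
```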
The Migration Challenge and Gradual Approach #
Moving from rich documents to Markdown is a significant migration effort and cultural shift: it asks organizations to rework established processes and long-cultivated information repositories in favor of simpler documentation formats. The challenge parallels the difficulty organizations face when transitioning from traditional project management approaches (PowerPoint-based planning, Excel tracking) to issue-based, design-document-driven development workflows.
However, this isn’t an all-or-nothing proposition. Rather than choosing between “all PowerPoint and Excel” and “all Markdown,” organizations should focus on gradually increasing the share of information kept in AI-readable formats. The characteristics of management systems matter too—systems that can keep information relatively flat are better suited than those requiring complex hierarchical permissions.
While platforms that support multi-layered permissions for enterprise governance are certainly important, increasing the portion of information that can be managed with high transparency within the organization benefits everyone. This is about finding the right balance and using appropriate tools for different purposes, not making binary choices.
Teams need to learn new tools and workflows. Complex documents need to be restructured. Permission systems need to be redesigned. Yet organizations that make this transition report surprising benefits beyond AI integration: improved collaboration, better version control, more accessible documentation, and reduced tool complexity.
InnerSource Provides the Framework #
InnerSource provides proven strategies for this kind of organizational transformation. It offers migration strategies that maintain document fidelity while improving AI accessibility, unified information architecture principles, and open-source-inspired documentation practices.
The methodology acknowledges the trade-offs between rich documents and AI accessibility while providing pathways for gradual transition that minimize disruption.
5. The Missing Context Crisis: Understanding the “Why” #
AI knows the “what” but not the “why.” It sees snapshots of completed work but lacks the context of how and why decisions were made. This limitation creates significant problems for AI-assisted development.
The Snapshot Problem #
Many people give AI snapshot information and expect it to understand the full context, but this approach fails because it lacks the crucial “why” behind decisions. When organizations need to solve problems, there are typically massive amounts of information and numerous potential solutions available. Even when alternative solutions exist, there are usually extensive reasons why those solutions weren’t chosen previously—but this reasoning is rarely documented comprehensively.
Current AI systems see finished code but not the development process. They know that a function exists but not why it was written in a particular way. They can identify “inefficient” code but can’t distinguish between genuinely problematic code and code that’s deliberately structured for specific reasons.
This creates dangerous scenarios where AI suggests “improvements” that break carefully constructed solutions or removes “redundant” code that serves important but non-obvious purposes.
The Informal Knowledge Gap #
Much of the valuable context exists in informal communications: GitHub issue discussions, Slack conversations, Microsoft Teams threads, hallway conversations, and design decisions made in meetings. This institutional knowledge is often inaccessible to AI systems or gets lost over time, yet it’s crucial for understanding why code exists in its current form.
New team members often can’t understand why certain implementations should be avoided, and AI faces the same limitation. This historical context—documenting not just what was decided but why alternatives were rejected—is valuable for both human contributors and AI systems.
Creating AI-Accessible Decision Trails #
The solution requires creating systems to capture and make decision-making processes accessible to AI. This doesn’t mean recording every conversation, but it does mean formalizing important decisions and their reasoning.
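One lightweight convention borrowed from open source is the Architecture Decision Record (ADR). Here is a minimal scaffolding sketch (the path and numbering scheme are assumptions; the section headings follow the widely used Nygard template):

```python
# Scaffold an Architecture Decision Record (ADR): a short, AI-readable
# Markdown file that captures the decision, the rejected alternatives,
# and the reasoning. Paths and numbering scheme are examples.
from datetime import date
from pathlib import Path

TEMPLATE = """# {number}. {title}

Date: {today}

## Status
Proposed

## Context
What problem are we solving, and under which constraints?

## Decision
What we chose, and which alternatives we rejected (and why).

## Consequences
What becomes easier or harder as a result.
"""


def new_adr(number: int, title: str, directory: str = "docs/adr") -> Path:
    """Create a numbered ADR file and return its path."""
    slug = title.lower().replace(" ", "-")
    path = Path(directory) / f"{number:04d}-{slug}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(
        TEMPLATE.format(number=number, title=title, today=date.today())
    )
    return path


print(new_adr(7, "Adopt Markdown for design docs"))
```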
In open source projects, when decisions are made in completely different contexts or platforms, new contributors find it extremely difficult to understand how implementations were realized or how current decisions were made. Such barriers end up hindering contributor participation and making contributions more difficult. AI faces identical challenges.
This involves both technological challenges (integrating with communication systems) and cultural challenges (encouraging documentation of decision-making processes).
InnerSource Culture Naturally Documents Decisions #
Open source projects excel at documenting decisions because transparency is fundamental to their success. Contributors need to understand not just what code does, but why it exists and what problems it solves.
InnerSource brings this culture inside organizations. It encourages teams to document their reasoning, discuss decisions openly, and create audit trails that preserve institutional knowledge.
The methodology provides decision documentation frameworks, processes for formalizing informal communications, and practices for linking code changes to business decisions.
The Reality of Organizational Constraints #
Many of these challenges will likely be solved by technology in the short to medium term. Improved AI capabilities, better integration tools, and enhanced context understanding will address some of these issues automatically.
But organizations can’t wait for perfect solutions. They face immediate pressures to leverage AI capabilities while managing real constraints: budget limitations, risk aversion, regulatory requirements, and the simple reality that changing large organizations takes time.
The Actionability Problem #
When these discussions arise, drastic recommendations sometimes get proposed. I remember a customer from my time at Microsoft who was struggling to build in-house development capabilities. When we brought a Microsoft executive to meet with them, her suggestion was straightforward: “Since you’re a large company, why don’t you just acquire companies with lots of cutting-edge engineers?”
That recommendation was probably correct, but…
It’s easy to make dramatic recommendations: “Buy innovative companies,” “Rebuild your systems,” “Replace resistant employees,” “Hire AI experts.” But most organizations can’t easily implement such suggestions.
On social media, such opinions pass as self-evidently correct, and in an ideal world a visionary CEO would execute such transformations rapidly. As far as it goes, the argument is right.
But actual enterprise leaders and middle managers in real companies already know this. They know, they know. Yet there are massive reasons why they can’t execute these solutions. They can’t justify major acquisitions to shareholders. They lack the talent for successful post-merger integration. They need expensive consultants for major system overhauls. They’re constrained by existing contracts, compliance requirements, and operational dependencies.
The companies that can’t follow dramatic advice aren’t necessarily wrong—they’re operating within real constraints that “advisors” often ignore.
The Gradual Transformation Imperative #
This is why methodologies matter. Organizations need frameworks for gradual transition, supported by passionate leaders, enthusiastic contributors, and sustained cultural evolution.
Changing yourself is relatively simple. Changing environments, other people, and entire departments is genuinely difficult. Yet organizations must move forward despite these constraints.
The John Problem #
You, reading this, probably have a growth mindset and are actively seeking new AI topics. If you’re a highly paid engineer who considers such developments natural, you’re definitely going to leverage that growth mindset to continuously improve performance. You probably think naysayers don’t belong in organizations.
But think about John in the neighboring team. His voluntary cooperation in growth initiatives is questionable. He’s not incompetent—he’s reasonably capable but requires more effort to motivate, or he’s excellent elsewhere but seemingly unmotivated in YOUR area because it doesn’t directly benefit him.
This isn’t necessarily about individual performance—it’s an organizational problem. How do you create conditions where John wants to participate in AI transformation? How do you align incentives so that cooperation feels natural rather than forced?
The Expanding Definition of “Engineer” #
InnerSource was originally designed as an engineering methodology for handling source code, information, and collaboration while encouraging new contributors to participate in development ecosystems. But the definition of “engineer” is clearly expanding.
When Ruby on Rails was developed, “framework users” became part of the engineering community. Rails provided their entry point into software development. Now, “Vibe Coding” and AI-assisted development represent new entry points for engineers.
As more people become involved in “engineering,” traditional boundaries blur. People previously considered “non-engineers” now participate in code creation, system design, and technical decision-making.
You might still think there’s a clear boundary between non-engineers and engineers. I understand the skepticism about whether non-engineers can suddenly acquire engineer-equivalent capabilities without substantial learning, but the undeniable fact is that barriers to entry and participation keep falling.
The Democratization of Software Creation #
This expansion mirrors previous technological shifts. Just as Ruby on Rails democratized web development by providing powerful abstractions, AI is democratizing software creation by reducing the technical barriers to code generation.
This democratization creates new challenges. How do you maintain quality when more people can create software? How do you ensure security when the barrier to system modification is lower? How do you preserve institutional knowledge when the technical workforce is more diverse?
InnerSource as Organizational Framework #
InnerSource provides answers to these challenges because it’s fundamentally about managing diverse communities of contributors with varying skill levels and motivations. It offers proven practices for onboarding new contributors, maintaining quality standards, and preserving institutional knowledge.
The methodology becomes increasingly vital as “engineering” expands to include AI-assisted developers. It provides the cultural and methodological framework for managing this new reality.
Conclusion: The Open Source Way as AI Strategy #
The future belongs to organizations that can successfully blend their unique knowledge and processes with AI capabilities. This isn’t about choosing between human expertise and artificial intelligence—it’s about creating synergistic relationships that amplify both.
The Open Source Way is the key to successful AI collaboration. Organizations that embrace transparency, encourage contribution, document decisions, share knowledge, and build communities will thrive in the AI era.
InnerSource, as the organizational embodiment of open source principles, provides the framework for this transformation. It addresses the fundamental challenges of information sharing, quality assurance, accessibility, and context preservation that organizations face when integrating AI into their development processes.
The Path Forward #
This isn’t about implementing InnerSource overnight or forcing dramatic organizational changes. It’s about gradually adopting practices that make your organization more AI-friendly while preserving the knowledge and culture that make you unique.
Start small. Choose one team or one project. Begin sharing code more openly. Document decisions more thoroughly. Standardize where it makes sense. Build trust through transparency.
The organizations that master this balance—between openness and security, between standardization and uniqueness, between AI capabilities and human judgment—will define the next era of software development.
The question isn’t whether AI will transform how we build software. It’s whether your organization will be shaped by that transformation or will help shape it.
The choice, as always, is yours. But the Open Source Way provides a proven path forward.