Teams will increasingly include both humans and AI agents, and we need to learn how best to configure them. A new Stanford University paper, "ChatCollab: Exploring Collaboration Between Humans and AI Agents in Software Teams," offers a range of useful insights. A few highlights:
💡 Human-AI Role Differentiation Fosters Collaboration. Assigning distinct roles to AI agents and humans in teams, such as CEO, Product Manager, and Developer, mirrors traditional team dynamics. This structure helps define responsibilities, ensures alignment with workflows, and allows humans to integrate seamlessly by adopting any role, fostering a peer-like collaboration environment where humans can both guide and learn from AI agents.
🎯 Prompts Shape Team Interaction Styles. The configuration of AI agent prompts significantly influences collaboration dynamics. For example, emphasizing "asking for opinions" in prompts increased such interactions by 600%. Thoughtfully designed role-specific and behavioral prompts can fine-tune team dynamics, enabling targeted improvements in communication and decision-making efficiency.
🔄 Iterative Feedback Mechanisms Improve Team Performance. Human team members in roles such as clients or supervisors can provide real-time feedback to AI agents. This iterative process ensures agents refine their output, ask pertinent questions, and follow expected workflows. Such interaction not only improves project outcomes but also builds trust and adaptability in mixed teams.
🌟 Autonomy Balances Initiative and Dependence. ChatCollab's AI agents exhibit autonomy by independently deciding when to act or wait based on their roles. For example, developers wait for PRDs before coding, avoiding redundant work. Ensuring that agents understand role-specific dependencies and workflows optimizes productivity while maintaining alignment with human expectations.
📊 Tailored Role Assignments Enhance Human Learning. Humans in teams can act as coaches, mentors, or peers to AI agents. This dynamic enables human participants to refine leadership and communication skills, while AI agents serve as practice partners or mentees. Configuring teams to simulate these dynamics provides dual benefits: skill development for humans and improved agent outputs through feedback.
🔍 Measurable Dynamics Enable Continuous Improvement. Collaboration analysis using frameworks like Bales' Interaction Process reveals actionable patterns in human-AI interactions. For example, tracking increases in opinion-sharing and other key metrics allows iterative configuration and optimization of combined teams.
💬 Transparent Communication Channels Empower Humans. Using shared platforms like Slack for all human and AI interactions ensures transparency and inclusivity. Humans can easily observe agent reasoning and intervene when necessary, while agents remain responsive to human queries.
Link to paper in comments.
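The kind of interaction analysis described above can be prototyped in a few lines. The sketch below is my own illustration, not the paper's method: it tags chat messages with rough Bales-style categories using naive keyword rules (the phrase lists are invented for the example) and counts them, which is enough to compare a team's interaction mix before and after a prompt change.

```python
from collections import Counter

# Naive keyword rules approximating a few Bales IPA categories.
# The phrase lists are illustrative assumptions, not a validated classifier.
CATEGORY_KEYWORDS = {
    "gives_opinion": ["i think", "in my opinion", "i believe"],
    "asks_for_opinion": ["what do you think", "any thoughts", "do you agree"],
    "gives_suggestion": ["we should", "let's", "i suggest"],
}

def classify(message: str) -> str:
    """Return the first category whose keywords appear in the message."""
    text = message.lower()
    for category, phrases in CATEGORY_KEYWORDS.items():
        if any(p in text for p in phrases):
            return category
    return "other"

def interaction_mix(messages):
    """Count how often each interaction category appears in a transcript."""
    return Counter(classify(m) for m in messages)

mix = interaction_mix([
    "What do you think about shipping Friday?",
    "I think we need one more QA pass.",
    "Let's split the remaining tickets.",
])
```

Running the same counter over transcripts from two prompt configurations gives a concrete way to check whether a behavioral prompt change actually shifted the team's interaction style.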
How to Manage AI Coding Tools as Team Members
Summary
Managing AI coding tools as team members means treating AI not just as software, but as collaborative partners in your team’s workflow. This approach helps teams define clear roles, build trust, and improve productivity by integrating AI into daily tasks and decision-making processes.
- Clarify roles: Assign specific responsibilities to AI agents and human members to prevent confusion and encourage better cooperation.
- Create feedback loops: Set up regular channels for reviewing AI-generated work so your team can offer guidance, ask questions, and keep projects on track.
- Promote peer learning: Encourage team members to share their experiences and insights, helping everyone grow comfortable working alongside AI teammates.
-
Stop Treating AI Like a Tool, Start Onboarding It Like a Teammate! 🚀 Are you struggling to get real value from AI in your team? The problem might not be the technology, but how you're integrating it. Just like a new hire, AI needs clear roles, training, and ongoing feedback to truly thrive. Here's how:
* Define clear responsibilities: What specific tasks will the AI handle?
* Invest in "AI literacy": Everyone on the team needs to understand AI's capabilities and limitations.
* Establish communication protocols: How will the AI share its insights, and when will it need help?
* Provide continuous training and feedback: Help the AI learn and improve, just as you would with any team member.
* Foster collaboration and trust: Encourage teamwork between humans and AI.
* Iterate and adapt: Be flexible and adjust your approach as the AI evolves.
* Address ethical considerations: Be mindful of bias and ensure fairness.
The key takeaway? Treat AI as a partner, not just a tool. Build a collaborative environment where AI can flourish, and you'll unlock its true potential.
-
The era of AI tools is over. Welcome to AI teammates. We're now building autonomous agents that operate like team members. These agents are more than personas. They're modular, trained, role-specific assistants that can:
- Execute repeatable workflows
- Interpret and adapt based on uploaded data
- Hold persistent memory of your style, tone, or SOPs
- Integrate with APIs, tools, and automation stacks
Here's how to leverage them strategically, not just play with them:
✅ 1. Treat your agent like you're hiring an ops lead. Think in terms of delegation, not automation. Write a role description. Define its scope. Explain what "done well" looks like. The clearer the initial "onboarding," the better the performance.
✅ 2. Build with process, not just prompts. Upload reference documents (templates, decks, SOPs). Guide it through your systems and workflows. Remember: AI needs context to become competent.
✅ 3. Anchor it to a specific business function. General assistants give general outputs. But an "Investor Memo GPT" or "Weekly Analytics GPT" gets to business faster. Function > title.
✅ 4. Use feedback loops aggressively. Agents improve with structured input. Keep a running log of breakdowns, weak spots, and edge cases. Update your instructions like you would a knowledge base or playbook.
✅ 5. Operationalize with real stakes. Move beyond play. Deploy agents where they reduce real friction: client onboarding, lead follow-ups, performance reports, etc. Start with low-risk, high-frequency tasks. Then scale.
This isn't another toy. This is the beginning of a new interface between leadership and execution.
💡 Want to see the full framework I use to deploy GPT agents across sales, content, and research ops?
📩 Subscribe here to get it → https://lnkd.in/gCV3_Raw
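The "hire an ops lead" framing in point 1 can be made concrete by writing the role description as structured data and rendering it into a system prompt. This is a minimal sketch under assumptions: the fields and the Investor Memo example are hypothetical, not a standard schema or any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class AgentRole:
    """A written 'job description' for an AI agent (illustrative structure)."""
    title: str
    scope: list           # tasks the agent owns
    out_of_scope: list    # explicit non-goals
    done_criteria: list   # what "done well" looks like

    def to_system_prompt(self) -> str:
        # Render the role description as an onboarding-style system prompt.
        return "\n".join([
            f"You are the {self.title}.",
            "You own: " + "; ".join(self.scope) + ".",
            "You do NOT handle: " + "; ".join(self.out_of_scope) + ".",
            "Work is done well when: " + "; ".join(self.done_criteria) + ".",
        ])

# Hypothetical example role, in the spirit of an "Investor Memo GPT".
memo_agent = AgentRole(
    title="Investor Memo Assistant",
    scope=["drafting memo sections", "summarizing metrics"],
    out_of_scope=["sending emails", "final sign-off"],
    done_criteria=["every claim cites a source document"],
)
prompt = memo_agent.to_system_prompt()
```

Keeping the role spec as data rather than free text means the scope and done-criteria can be reviewed and versioned like any other playbook, which is exactly the feedback-loop habit point 4 recommends.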
-
Throwing AI tools at your team without a plan is like giving them a Ferrari without driving lessons. AI only drives impact if your workforce knows how to use it effectively. After:
1. Defining objectives
2. Assessing readiness
3. Piloting use cases with a tiger team
Step 4 is about empowering the broader team to leverage AI confidently. Boston Consulting Group (BCG) research and Gilbert's Behavior Engineering Model show that high-impact AI adoption is 80% about people, 20% about tech. Here's how to make that happen:
1️⃣ Environmental Supports: Build the Framework for Success
- Clear Guidance: Define AI's role in specific tasks. If a tool like Momentum.io automates data entry, outline how it frees up time for strategic activities.
- Accessible Tools: Ensure AI tools are easy to use and well integrated. For tools like ChatGPT, create a prompt library so employees don't have to start from scratch.
- Recognition: Acknowledge team members who make measurable improvements with AI, like reducing response times or boosting engagement. Recognition fuels adoption.
2️⃣ Empower with Tiger Team Champions
- Use Tiger/Pilot Team Champions: Leverage your pilot team members as champions who share workflows and real-world results. Their successes give others confidence and practical insights.
- Role-Specific Training: Focus on high-impact skills for each role. Sales might use prompts for lead scoring, while support teams focus on customer inquiries. Keep it relevant and simple.
- Match Tools to Skill Levels: For non-technical roles, choose tools with low-code interfaces or embedded automation. Keep adoption smooth by aligning with current abilities.
3️⃣ Continuous Feedback and Real-Time Learning
- Pilot Insights: Apply findings from the pilot phase to refine processes and address any gaps. Updates based on tiger team feedback benefit the entire workforce.
- Knowledge Hub: Create an evolving resource library with top prompts, troubleshooting guides, and FAQs. Let it grow as employees share tips and adjustments.
- Peer Learning: Champions from the tiger team can host peer-led sessions to show AI's real impact, making it more approachable.
4️⃣ Just-in-Time Enablement
- On-Demand Help Channels: Offer immediate support options, like a Slack channel or help desk, to address issues as they arise.
- Use AI to enable AI: Create custom GPTs that are task- or job-specific to lighten workload and cognitive load. Leverage NotebookLM.
- Troubleshooting Guide: Provide a quick-reference guide for common AI issues, empowering employees to solve small challenges independently.
AI's true power lies in your team's ability to use it well. Step 4 is about support, practical training, and peer learning led by tiger team champions. By building confidence and competence, you're creating an AI-enabled workforce ready to drive real impact. Step 5 coming next ;)
PS: My next podcast guest and I talk about what happens when AI does a lot of what humans used to do… Stay tuned.
-
One of the biggest challenges with using AI coding tools like Aider and Cursor in brownfield projects is the time lost in setting context. Every time a new developer (or even an AI assistant) joins the project, they have to figure out which files are needed for a particular task and how they connect. We tried something simple, and it made a huge difference.
📌 Instead of letting AI generate code and moving on, we ask it to document what each file does once a task is completed. We commit this to a context.yaml file alongside the code. The next person, or AI tool, that needs to work on it has instant context. No more digging through files trying to understand what's happening.
📌 Another small but effective hack: saving useful AI prompts as part of the codebase. If we find a great prompt for generating Swagger docs, writing a new API, or refactoring legacy code, we commit it in a /prompts/ folder. It's like leaving behind a playbook that speeds up future work.
📌 The best part? Now, you can ask the AI agent which files to include for a given task. Instead of scanning the entire codebase, the AI can use the context.yaml to suggest the right files. AI in collaboration is much more powerful than individual capabilities.
These small changes have saved us hours of effort. AI is great at writing code, but it's even better when we help it understand the project. How do you manage context when using AI in brownfield projects? I'd love to hear what's working for you. 👇
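The file-suggestion step in this workflow lends itself to a small helper. The sketch below is illustrative, not the team's actual setup: the file names, notes, and keyword-overlap heuristic are invented for the example, and in a real project the dict would be loaded from the committed context.yaml rather than hard-coded.

```python
# Illustrative stand-in for a parsed context.yaml; file names and notes
# are hypothetical, not from any real project.
CONTEXT = {
    "api/orders.py": "REST endpoints for creating and listing orders",
    "services/billing.py": "Invoice generation and payment retry logic",
    "db/models.py": "SQLAlchemy models shared by orders and billing",
}

def keywords(text: str) -> set:
    # Ignore very short words so filler like "for" and "and" doesn't match.
    return {w for w in text.lower().split() if len(w) >= 4}

def suggest_files(task: str, context: dict) -> list:
    """Suggest files whose recorded purpose overlaps the task description."""
    task_words = keywords(task)
    return [path for path, note in context.items()
            if task_words & keywords(note)]

files = suggest_files("fix payment retry for failed invoices", CONTEXT)
```

Even this naive overlap check shows the design payoff: because each file's purpose is written down once, any tool (or person) can narrow a task to a handful of candidate files without scanning the whole codebase.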
-
Scaling AI Code Tooling at Enterprise Scale: Beyond the Hype & FOMO 🚀🤖💡
Deploying AI code generation across thousands of developers isn't about chasing every shiny new feature; it's about thoughtful, scalable implementation that delivers real value. I have found that real enterprise-wide AI adoption hinges on these five critical pillars:
1. Seamless Existing IDE Integration. Meet developers in their preferred, existing IDEs; don't force a change of workflow. Embedding AI where teams already work maximises adoption.
2. Context Management. Go beyond simple relevance tuning by focusing on robust context management. AI tooling must understand the developer's immediate coding context, project history, and enterprise-specific patterns to minimise noise and maintain developer flow and productivity.
3. Structured Enablement Programs. Roll out enablement programs with clear support channels so all 2,000+ developers can extract genuine value, not just experiment. Empower teams with training, documentation, and a fast feedback loop.
4. Enterprise-Grade Security, AI Governance & IP Protection. Security isn't just a checkbox. We embed cybersecurity, AI governance, and intellectual property safeguards into every layer, from robust data privacy and continuous monitoring to clear IP ownership and compliance. By handling these critical aspects centrally, we free our developers to focus on building great software. They don't have to worry about security or compliance, as it's built in!
5. Comprehensive Metrics Frameworks. Measure what matters: completion rates, bug reduction, and time saved. Leveraging tools like the DX AI Measurement Framework has proven powerful, providing deep, actionable insights into how AI code tooling impacts developer experience and productivity. These frameworks enable us to track real ROI, identify areas for improvement, and continuously refine our approach to maximise value.
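Pillar 5 can start very small. The sketch below computes an acceptance rate and a rough time-saved estimate from a list of completion events; the event shape and the 45-seconds-per-acceptance constant are my own assumptions for illustration, not part of the DX framework or any vendor's telemetry schema.

```python
def adoption_metrics(events, seconds_saved_per_accept=45):
    """Summarize AI completion events.

    Each event is a dict like {"accepted": bool}. The per-acceptance
    time saving is an assumed constant, not a measured value; a real
    rollout would calibrate it against observed developer time.
    """
    total = len(events)
    accepted = sum(1 for e in events if e["accepted"])
    return {
        "acceptance_rate": accepted / total if total else 0.0,
        "est_minutes_saved": accepted * seconds_saved_per_accept / 60,
    }

metrics = adoption_metrics([
    {"accepted": True},
    {"accepted": False},
    {"accepted": True},
    {"accepted": True},
])
```

Even a crude baseline like this makes week-over-week comparisons possible, which is what turns "are developers using it?" into a trackable ROI question.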
Successful rollout comes not from FOMO-driven adoption of every new AI feature, but from consistent, pragmatic implementation that truly enhances developer productivity at scale. #ai #EnterpriseAI #DevEx #AICodeGeneration #TescoTechnology #Engineering #ArtificialIntelligence #DeveloperExperience
-
The New Job of Engineering Managers When AI Joins the Team
AI has quietly become the newest member of every engineering team. The question is no longer "should we use AI" but "how do we lead when AI is part of the team's workflow." Engineering managers now have a different job than even two years ago. It's not about replacing engineers with models. It's about building a team that works with AI the same way they work with testing tools, build systems, or cloud services. Here is what an AI-native engineering manager actually does:
1. Shift the team from task output to system thinking. Anyone can generate code with AI. Only strong teams can design systems. Your job is to help your engineers zoom out and reason about architecture, tradeoffs, failure modes, and long-term maintainability. AI handles typing. Humans handle thinking.
2. Build workflows where AI removes cognitive load. Teams that win are the ones who stop treating AI as a "code machine" and start using it for reviews, scaffolding, debugging, documentation, architecture diagrams, and learning. Managers must set up these workflows so engineers spend their energy on design, not boilerplate.
3. Coach for judgment, clarity, and decision making. AI can draft five options. Only an engineer with good instincts can choose the right one. Your role becomes less about unblocking tickets and more about strengthening judgment and reasoning under ambiguity.
4. Redefine collaboration norms. AI creates parallel streams of work, and context gets scattered. Good managers create rituals where engineers explain decisions, record assumptions, and keep the team aligned even when AI is moving everything faster.
5. Protect quality and long-term health. AI can generate ten times more code. Without stronger review, testing, and standards, you inherit ten times more tech debt. Your job is protecting the codebase from hidden risk while still unlocking speed.
6. Make experimentation normal. AI workflows evolve weekly. The best managers create an environment where trial, error, and iteration feel natural. Teams learn together instead of pretending they have it figured out.
AI will not replace engineering managers. But managers who ignore AI will slowly lose relevance. The teams that thrive will be the ones where humans design the system and AI accelerates the work. That's the new job. And it's a good one.
#EngineeringLeadership #AINativeTeams #FutureOfWork #SoftwareEngineering #TechLeadership #AIInEngineering #EngineeringManagement #TeamCulture #AIProductivity #BuildBetterTeams
-
Six months ago, I was hacking together simple automations on weekends. No versioning. No rollback. Everything broke constantly. Now? I have a set of AI agents that:
- Debug my vibe-coded scripts
- Draft product specs based on feedback
- Organize support tickets and escalate issues automatically
- Generate customer research summaries
- Even QA their own work
All while I sleep. Here's what changed:
𝗟𝗲𝘀𝘀𝗼𝗻 𝟭: 𝗔𝗜 𝗶𝘀 𝗻𝗼𝘁 𝗷𝘂𝘀𝘁 𝗮 𝘁𝗼𝗼𝗹, 𝗶𝘁’𝘀 𝗮 𝘁𝗲𝗮𝗺 𝗺𝗲𝗺𝗯𝗲𝗿
I stopped thinking about AI as a feature, and started designing workflows where it owns real responsibilities.
𝗟𝗲𝘀𝘀𝗼𝗻 𝟮: 𝗧𝗿𝗲𝗮𝘁 𝗽𝗿𝗼𝗺𝗽𝘁𝘀 𝗹𝗶𝗸𝗲 𝗽𝗿𝗼𝗰𝗲𝘀𝘀 𝗱𝗼𝗰𝘀
The better I got at prompting, the more my agents delivered. I now write prompts like onboarding guides: clear, scoped, with context and boundaries.
𝗟𝗲𝘀𝘀𝗼𝗻 𝟯: 𝗣𝗶𝗰𝗸 𝘁𝗵𝗲 𝗿𝗶𝗴𝗵𝘁 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁
I use LiquidMetal's agentic platform because it supports production-grade workflows with versioning, rollback, and data governance, not just pretty demos.
You don't need to code to build an AI team. But you do need to lead it like one.
TAKEAWAY: If you're a founder, PM, or operator drowning in repeatable tasks, maybe it's time to stop hiring… and start orchestrating. AI isn't replacing your team; it is your team.
-
Playbook for Managing Your Gen AI & Agentic AI Team Members
In our evolving work landscape, we've learned that no one person holds all the answers. Whether in human teams or among AI tools, relying on a single source can lead to blind spots, and yes, even hallucinations.
Imagine if you approached AI the way you manage your team. Instead of trusting just one tool, what if you curated a group of specialized AI assistants? Think of them as your team members, each bringing unique strengths. For example, I use ChatGPT, Copilot, Gemini, NotebookLM, Grok, Perplexity, and Midjourney; each tool plays a different role. Some help me brainstorm ideas, others generate structured content, and some validate accuracy. By treating these AI tools as collaborators, I create a maker-checker system, where insights are cross-verified for reliability.
✅ Reduces hallucination
✅ Enhances reliability
✅ Boosts productivity
This approach isn't just about using AI; it's about reimagining how we work. I hope one day Microsoft Teams, Discord, Slack, or Jira will allow us to add these AI assistants into a single "team", so instead of jumping between platforms, I could collaborate with all my AI colleagues in one seamless thread.
It's time we think beyond a single AI tool and start managing AI like a high-performing team. Are you already working with AI in a similar way? #GenAI #AILeadership #FutureOfWork #AIWorkflow
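The maker-checker idea reduces to a simple quorum check once each assistant's answer is normalized. This is a minimal sketch under assumptions: the assistant names and answers are stand-ins, and in practice each value would come from a real API call to the respective tool rather than a hard-coded string.

```python
from collections import Counter

def maker_checker(answers: dict, quorum: int = 2):
    """Cross-verify answers from several assistants.

    `answers` maps assistant name -> its (normalized) answer. Returns
    the consensus answer if at least `quorum` assistants agree,
    otherwise None so a human can arbitrate.
    """
    counts = Counter(answers.values())
    answer, votes = counts.most_common(1)[0]
    return answer if votes >= quorum else None

# Hypothetical responses from three assistants to the same question.
votes = {
    "chatgpt": "Paris",
    "gemini": "Paris",
    "perplexity": "Lyon",
}
consensus = maker_checker(votes)
```

When no quorum is reached, the disagreement itself is the useful signal: route that item to a human reviewer instead of silently picking one assistant's answer.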