Today's guest is Kun He, Lead Scientific Advisor at Bayer Crop Science. He joins Emerj Editorial Director Matthew DeMello to discuss how AI is transforming human talent and workforce development in agriculture.
Kun He, Lead Scientific Advisor at Bayer Crop Science, discusses balancing AI-driven efficiency with human expertise in agricultural manufacturing. The conversation explores how AI systems require transparency and human oversight to avoid false confidence in predictions, the critical role of regulations in preventing innovation disasters while not stifling progress, and the importance of maintaining human decision-making authority. Key insights include using AI as a copilot to check biases, integrating genotyping/phenotyping data for faster crop breeding, and treating regulatory frameworks as essential balance mechanisms rather than innovation barriers.
Discussion of how agricultural professionals maintain decision-making control while leveraging AI for efficiency. Emphasizes that no one is blindly giving AI the driver's seat in product development; scientists and breeders treat AI as one input among many, alongside brainstorming and peer consultation.
Addresses the negative public perception of companies like Monsanto and draws parallels to AI adoption fears. Highlights how both optimistic early adopters and cautious skeptics are necessary for balanced innovation, and emphasizes the shared interest between corporations and the public in safe products.
Explores Bayer's voluntary transparency efforts and the challenge of balancing public disclosure with competitive advantage. Discusses how major companies have social responsibility to ensure product safety while maintaining proprietary information necessary for healthy market competition.
Discusses the thalidomide tragedy that triggered modern pharmaceutical regulations, illustrating how innovations initially thought beneficial can have devastating unintended consequences. Emphasizes that regulatory frameworks emerged from real disasters and continue evolving to protect not just humans but endangered species.
Uses Mary Shelley's Frankenstein as a metaphor for how scientists often fail to document failures transparently. The novel shows Dr. Frankenstein's journal entries becoming sparse and non-transparent at the critical moment of reanimation, reflecting real-world tendencies to avoid documenting problems.
Explores the EU's push to eliminate animal testing and how AI-powered digital twins can maintain safety standards without animal sacrifice. Discusses the regulatory challenge of creating rules for new technologies without stifling innovation through excessive requirements.
Discusses research from 'Learned Optimism' showing how optimistic people overestimate control and overlook risks, while pessimistic people provide more accurate assessments. Argues both personality types are essential for balanced decision-making in different scenarios.
Advocates for careful regulatory expansion that prevents AI risks without shutting down opportunities. Emphasizes that scientists are already responsible actors, not Hollywood stereotypes, and that transparency programs aim to build public trust through information sharing.
Raises the unresolved question of how AI models will handle contradictory information when ingesting all human knowledge. Points out that human conflicts stem from incompatible beliefs, and questions whether AI can provide better decisions when trained on fundamentally contradictory worldviews.
Final summary emphasizing three critical strategies: integrating AI for genotyping/phenotyping workflows while retaining human breeders for breakthrough innovations, championing human gut instinct for bold R&D decisions, and treating AI as a copilot to check biases while prioritizing customer needs.
Transparency for AI Systems, Regulations, and Humans in Agricultural Manufacturing - with Kun He of Bayer