How-to Guide for Preventing School Massacres


Observing lives lost and trauma from preventable tragedies is among the most frustrating experiences of my career. However, whatever frustration we feel pales in comparison to the pain that victims and their family members experience.

Prevention of human-caused catastrophes has long been a top priority of our R&D. We have both a desire and an obligation to provide insight into what can be done to prevent school shootings and similar human-caused catastrophes. I sincerely hope this modest attempt to help will assist communities in taking specific, pragmatic actions to save lives.

Step 1: Understanding Prevention

  1. Investing well in prevention provides the highest possible return on investment.
    • In every respect, including human lives, trauma avoided, and financial ROI.
  2. The vast majority of human-caused crises are now preventable.
    • In today’s information-rich environment, most human-caused catastrophes are preceded by multiple warnings.
    • Effective prevention is primarily a combination of good science and engineering, which works best when kept independent of political and financial conflicts.
    • Although state-of-the-art systems are the most effective at capturing preventions, low-cost local efforts in manual form are still better than what exists today in most communities.
  3. The main components in effective prevention include:
    • A functional system to identify early warning signs.
    • Ability to connect dots from various disparate sources.
    • Methodology to qualify risk factors with accuracy.
    • Integrated reporting to appropriate authorities.
    • Follow-up to confirm necessary action was taken.
    • Foolproof governance to prevent abuses and failures.
  4. The ability of social networks to assist is limited.
    • Large social networks have significant resources and strong data science teams. But like the federal government, they have the entire world to be concerned with, not just one community or school. Social networks also have business models that often directly conflict with the needs of individuals and communities, and people use many different types of apps that come and go.
    • For these reasons communities need their own networked security systems that encourage participation and are free from external conflicts.  Community efforts need to coordinate with schools and law enforcement, which in turn integrate with federal agencies.
  5. The most effective prevention requires advanced systems engineering.
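The main components listed above can be illustrated with a minimal triage sketch in Python. Everything in it is hypothetical — the indicator names, weights, corroboration multiplier, and threshold are invented purely for illustration — and a real system would require the accuracy methodology and governance safeguards the list describes.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of the prevention components: identify warning
# signs, connect dots across disparate sources, qualify risk, and
# flag for reporting and follow-up. All names and weights are invented.

@dataclass
class Tip:
    source: str                   # e.g. "school", "social", "community"
    subject_id: str               # pseudonymous identifier
    indicators: set = field(default_factory=set)

def connect_dots(tips):
    """Merge tips about the same subject from disparate sources."""
    merged = {}
    for tip in tips:
        entry = merged.setdefault(tip.subject_id,
                                  {"sources": set(), "indicators": set()})
        entry["sources"].add(tip.source)
        entry["indicators"].update(tip.indicators)
    return merged

def qualify_risk(entry, weights):
    """Score weighted indicators; independent corroboration raises risk."""
    score = sum(weights.get(i, 0) for i in entry["indicators"])
    if len(entry["sources"]) > 1:
        score *= 1.5  # corroborated across independent sources
    return score

WEIGHTS = {"threat_statement": 5, "weapon_access": 4, "grievance": 2}  # hypothetical
REPORT_THRESHOLD = 8                                                   # hypothetical

tips = [
    Tip("school", "S1", {"grievance"}),
    Tip("social", "S1", {"threat_statement", "weapon_access"}),
]
for subject, entry in connect_dots(tips).items():
    score = qualify_risk(entry, WEIGHTS)
    if score >= REPORT_THRESHOLD:
        print(f"report {subject} to authorities (score {score}); schedule follow-up")
```

Even this toy version shows why governance matters: the scoring weights and threshold directly determine who gets reported, so they must be set and audited carefully.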

Since the Phoenix memo was revealed following the 9/11 terrorist attacks, an enormous amount of time and intellectual and financial capital has been invested in prevention methods and systems. The best systems are very good but expensive. Independent systems with the capacity to effectively manage the complexity and scale of public interaction are not yet available in an affordable, turnkey manner.[i] However, low-cost methods do exist that all communities can and should adopt to significantly lower the risk of school shootings and similar tragedies.

Step 2:  Short-term Action Plan

  • Create a small high-level taskforce of operational managers from local organizations including law enforcement and IT experts.
  • Recruit a leader with appropriate operations experience and no conflicts (a respected retired leader from business, military, police, fire department, etc.).
  • Identify a small consistent source of funds from the community (foundation, professional group, association, non-profit, or pass the hat).
  • Task leader to recruit and train volunteer security guards and a web development team (good experience for community college or university CS program).
  • Begin by building and testing a central website that can accept secure tips, with detailed Q&A stored in a well-structured database.

Web site

  • Make the site simple, brief, secure, transparent, clear and functional.
  • Confidentiality should be ensured and anonymous tips allowed.
  • Establish strict access & security protocols to prevent meddling or cover-ups.
  • Use the site to educate community and recruit volunteer guards.
  • Back-up and store data regularly in secure manner.
  • Establish integration with law enforcement for threat submissions.
  • Offer and publicize downloadable app for student phones.
  • The goal should be to reduce the probability of catastrophic events to the lowest level possible given the physical, financial and regulatory constraints.
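As a concrete starting point for the website and database bullets above, here is a minimal sketch using Python’s built-in sqlite3 module — the kind of exercise a community college or university CS team could build on. The table layout, column names, and file names are assumptions for illustration, not a prescribed schema; it shows anonymous tip storage, fields for tracking forwarding to law enforcement and follow-up, and a regular backup using sqlite3’s online backup API.

```python
import sqlite3

# Hypothetical tip database: tips are stored without submitter identity
# so anonymous tips stay anonymous; 'forwarded' and 'followed_up' track
# reporting to law enforcement and confirmation that action was taken.

conn = sqlite3.connect("tips.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS tips (
        id          INTEGER PRIMARY KEY,
        received_at TEXT NOT NULL DEFAULT (datetime('now')),
        category    TEXT NOT NULL,              -- e.g. 'threat', 'weapon', 'other'
        details     TEXT NOT NULL,              -- structured Q&A answers as JSON text
        anonymous   INTEGER NOT NULL DEFAULT 1,
        forwarded   INTEGER NOT NULL DEFAULT 0, -- sent to law enforcement?
        followed_up INTEGER NOT NULL DEFAULT 0  -- necessary action confirmed?
    )
""")
conn.execute(
    "INSERT INTO tips (category, details) VALUES (?, ?)",
    ("threat", '{"q1": "overheard specific threat", "q2": "today"}'),
)
conn.commit()

# Regular secure backup: copy the live database to a backup file
# using sqlite3's online backup API.
backup = sqlite3.connect("tips_backup.db")
with backup:
    conn.backup(backup)
backup.close()
conn.close()
```

In a real deployment the database file and its backups would of course live on access-controlled, encrypted storage, per the strict access and security protocols above.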

Step 3: Long-term Plan

Unified artificial intelligence (AI) systems with human augmentation distributed over fully interoperable networks should be a high priority for the long-term.  Carefully designed networked systems are not only optimal for physical safety, but also health, learning, productivity and economic competitiveness.

Conceptually, AI systems follow a trajectory similar to Moore’s law, which accurately forecast that the number of transistors in a dense integrated circuit would double approximately every two years. Performing many functions well in one system can be much more efficient and easier to justify from a financial investment perspective. Distributed AI systems offer the potential for a multiplier effect that can pay dividends for many years.

A Novel Financing Program for Prevention

One of the obstacles is that investment in effective prevention of low-probability events is difficult to justify for any particular location or entity, particularly in tight budget environments. Last year we developed a novel model called HumCat to help overcome this financing problem. The HumCat program (short for human-caused catastrophes) bundles insurance coverage and bonding with unified AI systems, and can be extended to bonds that finance projects.

The economic goal is to demonstrate improved risk profiles for communities over time, which can then improve ratings and reduce insurance and borrowing costs. The intent is for the HumCat program to not only cover the cost of the system install and monitoring, but also pay for itself many times over in the form of several types of captured preventions that save lives and treasure.[ii]
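The economics of prevention can be made concrete with a simple expected-loss calculation. All figures below are hypothetical, chosen only to illustrate the arithmetic — they are not actual HumCat numbers.

```python
# Hypothetical expected-loss arithmetic for prevention ROI.
# All numbers are invented for illustration only.

annual_probability = 0.001          # chance of a catastrophic event per year
loss_if_event      = 2_000_000_000  # direct and indirect cost if it occurs
risk_reduction     = 0.80           # fraction of risk the program removes
program_cost       = 1_000_000      # annual cost of the prevention program

expected_loss_before = annual_probability * loss_if_event           # 2,000,000
expected_loss_after  = expected_loss_before * (1 - risk_reduction)  #   400,000
annual_benefit       = expected_loss_before - expected_loss_after   # 1,600,000

print(f"expected annual benefit: ${annual_benefit:,.0f}")
print(f"benefit / cost ratio:    {annual_benefit / program_cost:.1f}x")
```

This is the logic an insurer or rating agency would apply: if the risk reduction can be demonstrated over time, the improved expected loss justifies lower premiums and borrowing costs, which is how the program can pay for itself.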


Mark Montgomery is the founder and CEO of Kyield, originator of the theorem ‘yield management of knowledge’, and inventor of the patented AI system that serves as the foundation for Kyield: ‘Modular System for Optimizing Knowledge Yield in the Digital Workplace’. He can be reached at

[i] In the case of our Kyield OS, the cost runs to tens of millions of dollars to develop into a turnkey hybrid cloud format, given the highly specific architecture required to provide independent, scalable prevention with optimal governance, monitoring, and security. The system includes governance for the entire network, the ability to continuously learn and improve automatically through a simple natural language interface, and downloadable apps for all individuals on any standard device.

[ii] HumCat is an abbreviation for prevention of human-caused catastrophes, roughly patterned after ‘nat cat’ (natural catastrophes): “A Million In Prevention Can Be Worth Billions of Cure with Distributed AI Systems”

An open letter to Fortune 500 CEOs on AI systems from Kyield’s founder


Since we offer an organizational and network operating system, technically defined as a modular artificial intelligence system, we usually deal with the most important strategic and operational issues facing organizations. This is most obvious in our new HumCat offering, which provides advanced technology for the prevention of human-caused catastrophes. Short of preventing an asteroid or comet collision with Earth, this is among the most important work that is executable today. Please keep that in mind while reading.

In our efforts to assist organizations we perform an informal review of our process with the goal of improving the experience for all concerned. In cases where we invest a considerable amount of time, energy, and money, the review is more formal and extensive, including SWOT analysis, security checks and reviews, and deep scenario plans that can become extremely detailed, down to the molecular level.

We are still a small company and I am the founder who performed the bulk of the R&D, so by necessity I am still involved in each case. Our current process has been refined over the last decade across many dozens of engagements with senior teams on strategic issues. In doing so we see patterns develop over time; we learn from them, and I share them with senior executives when their behavior causes problems. This is still relatively new territory as we carefully craft the AI-assisted economy.

I sent another such lessons-learned note to the CEO of a public company this morning. Usually this is done confidentially, to very few, and never revealed otherwise, but I wanted to share a general pattern that is negatively impacting organizations, in part due to the compounding effect it has on the broader economy. Essentially, it can be reduced to misapplying the company’s playbook when dealing with advanced technology.

The playbook in this instance, for lack of a better word, can be described as ‘industry tech’, as in fintech or insurtech. While new to some in banking and insurance, this basic model has been applied for decades with limited, mixed results. The current version has swapped the name ‘incubators’ for ‘accelerators’, innovation shops now take the place of R&D and/or product development, and corporate VC is still good old corporate VC. Generally speaking, this playbook can be a good thing in industries like banking and insurance, where the bulk of R&D came from a small group of companies that delivered very little innovation over decades, or worse, as we saw in the financial crisis, delivered toxic innovation. Eventually the lack of innovation can cause macroeconomic problems and place an entire industry at risk.

A highly refined AI system like ours is considered by many today to be among the most important and valuable of any type, hence the fear, however unjustified it is in our case. Anyone expecting to receive our ideas through open innovation on these issues is being irresponsible and dangerous to themselves and others, including your company. That is the same as bank employees handing out money for free, or insurance claims adjusters knowingly paying fraudulent claims at scale. Don’t treat the issue of intellectual capital and property lightly, including trade secrets, or it will badly damage your organization.

The most common mistake I see in practice is relying solely on ‘the innovation playbook’. CEOs especially should always be open to opportunity and on the lookout for threats, particularly any that have the capacity to make or save the company. Most of the critical issues companies face will not come from or fit within the innovation playbook. Accelerators, internal innovation shops and corporate VC arms are good tools when used appropriately, but if you rely solely on them you will almost certainly fail. None of the most successful CEOs that come to mind rely only on fixed playbooks.

These are a few of the more common specific suggestions I’ve made to CEOs of leading companies in dealing with AI systems, based in part on working with several of the most successful tech companies at a similar stage, and in part on engaging with hundreds of other organizations from our perspective. As you can see, I’ve learned to be a bit more blunt.

  1. Don’t attempt to force a system like Kyield into your innovation playbook. Attempting to do so will only increase risk for your company and do nothing at all for mine but waste time and money. Google succeeded in doing so with DeepMind, but it came at a high price and they needed to be flexible. Very few will be able to engage similarly, which is one reason why the Kyield OS is still available to customer organizations.

  2. With very few exceptions, primarily industry-specific R&D, we are far more experienced in AI systems than your team or consultants. With regard to the specific issues, functionality, and use cases within Kyield that we offer, no one I am aware of has more experience and intel. We simply haven’t shared it. Many are expert in specific algorithms, but not in distributed AI systems, which is what is needed.

  3. A few companies have now wasted billions of USD attempting to reinvent the wheel that we will provide at a small fraction of that cost. A small number of CEOs have lost their jobs due to events costing tens of billions that I have good reason to believe our HumCat could have prevented. It therefore makes no sense at all not to adopt it.

  4. The process must be treated with the respect and priority it deserves. Take it seriously. Lead it personally. Any such effort requires a team but can’t be entirely delegated.

  5. Our HumCat program requires training and certification for senior officers, for good reasons. If it weren’t necessary, it wouldn’t be required.

  6. Don’t fear us, but don’t attempt to steal our work. Some of the best in the world have tried and failed. It’s not a good idea to try with us or any of our peers.

  7. Resist the temptation to customize universal issues that have already been refined. We call this the open checkbook to AI system hell. It has ended several CEO careers already and it’s early.

  8. Since these are the most important issues facing any organization, it’s really wise to let us help. Despite the broad characterization, not all of us are trying to kill your organization or put your people out of work. Quite the contrary in our case or we would have gone down a different path long ago.

  9. We are among the best allies to have in this war.

  10. It is war.

Mark Montgomery is the founder and CEO of Kyield, originator of the theorem ‘yield management of knowledge’, and inventor of the patented AI system that serves as the foundation for Kyield: ‘Modular System for Optimizing Knowledge Yield in the Digital Workplace’. He can be reached at