
Analytic Briefs, Published Research, Opinion

Every once in a while we are able to publish our findings. Such releases are few and far between due to the nature of our contracts with clients. We do release some findings, usually found on The Cyber Shafarat (www.cybershafarat.com). The links on this page represent those documents.

All downloads of datasheets and briefs include automatic agreement to the Treadstone 71 terms and EULA: https://www.cyberinteltrainingcenter.com/p/terms

Our customers benefit from 17 years of cyber intelligence experience combined with years of boots-on-the-ground collection and analysis. Enhance your cyber and threat intelligence program with Treadstone 71.

Narrative Control and Censorship in Russia

The information war in Russia is not just the blocking of websites and the persecution of journalists. The system is much more complicated, built so that the Russian media receive information on Ukraine, the European Union, NATO, and the United States only from pre-approved sources, while specialized government agencies constantly monitor the media space for any alternative point of view. Download the brief here:

Narrative Control and Censorship in Russia

How the NATO PMESII is a subset of STEMPLES Plus

PSYOPS requires a thorough understanding of the target audience and their context, collected through various intelligence sources and analyzed using comprehensive frameworks like STEMPLES Plus and PMESII. This understanding underpins the design and implementation of PSYOPS campaigns and allows for an accurate assessment of their effectiveness.

Psychological Operations (PSYOPS) rely heavily on extensive research, intelligence, and information about potential target audiences. This information includes understanding the target audience's identity, location, vulnerabilities, susceptibilities, strengths, and weaknesses. PSYOPS also necessitates comprehensive knowledge about various contextual factors influencing the audience's attitudes and behaviors.

The Treadstone 71 STEMPLES Plus model provides a detailed framework for this kind of analysis. The acronym stands for Social, Technological, Economic, Military, Political, Legal, Educational, Security, plus Religion, Demographics, Infrastructure, Health, Psychological Makeup, and the Physical Environment. It is a comprehensive framework designed to understand the factors that might impact an audience's responses to PSYOPS.

"PMESII" stands for Political, Military, Economic, Social, Infrastructure, and Information. PMESII is another framework, aligned with STEMPLES Plus, that PSYOPS professionals use to understand the context in which their target audience operates.

  • Political: Understanding the political dynamics, power structures, and influential actors.
  • Military: Grasping the military structures and capabilities, including alliances, rivalries, and power dynamics.
  • Economic: Comprehending the economic situation, such as income levels, employment rates, and primary industries.
  • Social: Evaluating social and cultural characteristics, traditions, beliefs, and value systems.
  • Infrastructure: Assessing physical infrastructure like roads, bridges, and buildings, as well as digital infrastructure.
  • Information: Understanding the communication landscape, including access to and use of media and information technologies.
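
Frameworks like PMESII lend themselves to structured capture during collection. Below is a minimal Python sketch; the 0-10 scoring scale, the field names, and the coverage helper are illustrative assumptions for this sketch, not part of the framework's official definition:

```python
from dataclasses import dataclass, field

# The six PMESII factors; scores and notes fields below are
# illustrative assumptions, not an official schema.
PMESII_FACTORS = ["political", "military", "economic",
                  "social", "infrastructure", "information"]

@dataclass
class PmesiiAssessment:
    audience: str
    scores: dict = field(default_factory=dict)  # factor -> 0-10 score
    notes: dict = field(default_factory=dict)   # factor -> analyst notes

    def record(self, factor: str, score: int, note: str = "") -> None:
        if factor not in PMESII_FACTORS:
            raise ValueError(f"unknown PMESII factor: {factor}")
        self.scores[factor] = score
        if note:
            self.notes[factor] = note

    def coverage(self) -> float:
        """Fraction of PMESII factors with at least one assessment."""
        return len(self.scores) / len(PMESII_FACTORS)
```

An analyst could record one entry per factor per target audience and use the coverage figure to flag collection gaps before any campaign assessment begins.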

The collection of this information should come from all available sources and agencies. The process is part of a broader intelligence requirements management plan, ensuring a holistic view that integrates all relevant aspects. This could involve a variety of intelligence sources, including human intelligence (HUMINT), signals intelligence (SIGINT), and open-source intelligence (OSINT).

Intelligence is also critical for determining the effectiveness of PSYOPS activities. By comparing pre- and post-operation data, one can gauge the impact of the PSYOPS campaign. Gauging the impact involves monitoring changes in the audience's attitudes, behaviors, or perceptions or noting shifts in the broader PMESII indicators.

Want to know more?

 

Albanian Attack on Ashraf-3 demonstrates items in our report - Read it here

Iranian Diplomatic and Political Pressure as a Result of Prisoner Swaps - Albanian Attack on Ashraf-3 demonstrates items in our report

An interesting observation: Iranian social media channels and associated regime accounts announced the attack before any Albanian or news outlets.

The former President and Prime Ministers of Albania held a press conference condemning the attack and calling it baseless. The Albanian parliament formed an urgent committee to investigate, and the interior minister was called in by the committee for an urgent hearing. Learn more about Iranian psyops and cognitive warfare.

  • Negotiating Leverage: Iran holds foreign nationals in custody as bargaining chips in negotiations, swapping these individuals for its citizens held overseas or for other concessions, such as lifting sanctions, providing financial or material resources, or removing the PMOI from their soil.
  • Domestic Approval: Iran frames successful prisoner swaps as diplomatic victories that boost the government's approval ratings at home, showing that the government can protect its citizens abroad and secure their release when they are in trouble.
  • International Image: Releasing foreign prisoners improves Iran's international image, presenting it as humane, fair, or willing to engage in diplomatic solutions, which aids its international relations and decreases hostility from other nations.
  • Direct Diplomatic Engagement: Prisoner swaps create opportunities for direct engagement with Western countries, helping open a dialog when formal diplomatic channels do not exist and opening doors for further negotiations on other matters.

Read the full report 

Cyber PSYOPs

Psychological operations, or PSYOP, are activities designed to influence individuals' or groups' behaviors, emotions, and attitudes. We see psyops used in marketing, public relations, politics, warfare, and therapeutic contexts. While ethical guidelines strongly discourage manipulation, understanding PSYOP can illuminate how messages influence audiences and promote understanding, empathy, and positive behavior change.

There are critical steps in the planning and execution of psychological operations:

  • Understand Your Audience: Before attempting to influence a target audience, it is crucial to understand them. Understanding your audience might involve researching their demographics, psychographics, culture, values, beliefs, attitudes, behaviors, and other factors that could impact their perceptions and actions.
  • Set Clear Objectives: What do you hope to achieve? Setting clear objectives might involve changing behaviors, shaping perceptions, or influencing attitudes. The more specific your goals, the easier it is to plan your operations and measure their success.
  • Develop a Strategy: Once you understand your audience and objectives, you can begin crafting a strategy. Developing a strategy involves choosing the right message, medium, and timing to influence your audience. You might consider employing principles of persuasion, social influence, and behavioral change.
  • Create and Distribute Content: Based on your strategy, you must create content that can influence your audience. Creating and distributing content may include speeches, social media posts, advertisements, articles, or any other form of communication. Once your content is ready, distribute it through channels that will reach your target audience.
  • Monitor and Adjust: After your operation begins, monitoring its progress is essential. Monitoring and adjusting your operation involves tracking metrics like engagement rates, attitude changes, or behavioral outcomes. If your operation is not achieving its objectives, you may need to adjust your strategy, content, or distribution methods.
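
The monitor-and-adjust step above reduces to comparing pre- and post-operation measurements against target shifts. A minimal sketch follows; the metric names and target thresholds are illustrative assumptions:

```python
def attitude_shift(pre: dict, post: dict) -> dict:
    """Per-metric change between pre- and post-operation surveys."""
    return {k: post[k] - pre[k] for k in pre if k in post}

def needs_adjustment(shift: dict, targets: dict) -> list:
    """Metrics whose measured shift fell short of the target shift."""
    return [k for k, target in targets.items()
            if shift.get(k, 0.0) < target]

# Hypothetical example: engagement rose, favorability barely moved.
pre = {"engagement": 0.12, "favorability": 0.40}
post = {"engagement": 0.19, "favorability": 0.41}
shift = attitude_shift(pre, post)
lagging = needs_adjustment(shift, {"engagement": 0.05, "favorability": 0.05})
```

Here `lagging` contains only `"favorability"`, signaling that the message, medium, or timing for that objective needs revision.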

Read the Brief

Iranian Cyber and Physical Acts Against Any Opposition

Iran's Freedom March 1 July 2023 Paris

From Cyber Grey Zone Actions to Assassinations – PMOI in the Crosshairs.

The following is an overview of Iranian regime tactics, techniques, and methods used against dissidents and opposition groups. The People's Mojahedin Organization of Iran (PMOI) holds a Free Iran conference every summer. Every year, the Iranian regime works to discredit, disrupt, delay, and destroy any attempt by the PMOI to hold the conference. From physical threats to the hacking of foreign governments to political pressure tied to prisoner exchanges, Iran uses every available tactic to push the envelope, and these actions continue.

Cyber grey zone actions blur the line between acceptable state behavior and hostile acts, creating challenges for attribution, response, and establishing clear norms and rules in the cyber domain. Addressing these challenges requires international cooperation, robust cybersecurity measures, and the development of norms and agreements to regulate state behavior in cyberspace.

Iranian cyber grey zone activities refer to malicious actions in cyberspace that fall short of a full-fledged cyberattack but aim to achieve strategic objectives.

  • Espionage: Iran conducts cyber espionage campaigns targeting foreign governments, organizations, and individuals. These activities involve stealing sensitive information, such as political or military intelligence, intellectual property, or personal data.
  • Disinformation and Influence Operations: Iran engages in online disinformation campaigns, spreading misleading information or propaganda to shape public opinion and advance its political or ideological agenda.
  • DDoS Attacks: Distributed Denial of Service (DDoS) attacks involve overwhelming a target's servers or networks with a flood of traffic, rendering them inaccessible. Iran conducted DDoS attacks against various targets, including websites of foreign governments, media organizations, and financial institutions.
  • Hacking and Defacement: Iranian hacking groups have conducted cyber intrusions and website defacements to highlight their capabilities, make political statements, or retaliate against perceived adversaries. These activities often target government websites, news outlets, or organizations critical of Iranian policies.
  • Cyber Attacks on Critical Infrastructure: While not explicitly falling into the grey zone, Iran conducts cyberattacks on critical infrastructure, such as energy facilities, banks, and transportation systems. Notable examples include the 2012 attack on Saudi Aramco and the 2019 attack on the oil tanker industry.

Iranian Cog War activities

Social Media Manipulation: Iranian actors operate fake social media accounts and engage in disinformation campaigns to influence public opinion, particularly during sensitive periods like elections or geopolitical tensions.

Cyber Espionage: Iran executed various cyber espionage campaigns targeting governments, organizations, and individuals worldwide. These activities involve stealing sensitive information for intelligence purposes or as a method to gain a competitive advantage.

Website Defacements: Iranian hacker groups have conducted website defacements, replacing the content of targeted websites with their own messages or political statements. Iran uses defacements to highlight capabilities, raise awareness, or promote political ideologies.

Phishing and Spear-Phishing: Iranian actors execute phishing campaigns that use deceptive emails or messages to trick individuals into revealing sensitive information, such as login credentials or financial data.

Influence Operations: Iran engages in influence operations through various means, including spreading propaganda, manipulating narratives, and leveraging state-controlled media outlets to shape public opinion, both domestically and abroad.

Targeting Dissidents and Activists: Iranian cyber actors target dissidents, activists, and human rights organizations, both within Iran and abroad. These activities aim to disrupt or silence opposition voices.

Distributed Denial of Service (DDoS) Attacks: Iran conducts DDoS attacks targeting various websites and online services. These attacks overwhelm the targeted systems, rendering them inaccessible to legitimate users.

Data Theft and Intellectual Property Theft: Iranian cyber actors steal sensitive data, including intellectual property, from foreign companies, universities, and research institutions.

Ransomware Attacks: While not exclusively attributed to Iran, there have been instances where Iranian-linked groups deployed ransomware to extort money from organizations by encrypting their systems and demanding payment for their release.

READ THE FULL REPORT

Automating Evidence Using the Admiralty Scoring Model and CRAAP Test Integration

Automating all levels of the Admiralty Scoring Model in assessing cyber evidence involves developing a systematic process incorporating the model's criteria and scoring methodology. We list below possible steps to automate each level of the Admiralty Scoring Model.

  1. Collect and preprocess the cyber evidence: Gather the relevant cyber evidence, such as log files, network traffic data, system artifacts, or any other digital information related to the incident or investigation. Preprocess the data to ensure consistency and compatibility for analysis, which may include data cleaning, normalization, and formatting.
  2. Define the criteria for each level: Review the Admiralty Scoring Model and identify the criteria for each level. The model typically consists of several levels, such as Level 1 (Indication), Level 2 (Reasonable Belief), Level 3 (Strong Belief), and Level 4 (Fact). Define the specific criteria and indicators for assessment at each level based on the model's guidance.
  3. Develop algorithms or rules for evidence assessment: Design algorithms or rules that can automatically evaluate the evidence against the defined criteria for each level. This can involve applying machine learning techniques, natural language processing, or rule-based systems to analyze the evidence and make assessments based on the criteria.
  4. Extract features from the evidence: Identify the relevant features or attributes from the evidence that can contribute to the assessment process. These features may include indicators of compromise, timestamps, network patterns, file characteristics, or any other relevant information that aligns with the criteria for each level.
  5. Assign scores based on the criteria: Assign scores or ratings to the evidence based on the criteria for each level of the Admiralty Scoring Model. The scoring can be binary (e.g., pass/fail), numerical (e.g., on a scale of 1 to 10), or any other appropriate scale that reflects the level of confidence or belief associated with the evidence.
  6. Integrate the scoring process into a unified system: Develop a unified system or application incorporating the automated scoring process. This system should take the evidence as input, apply algorithms or rules to assess the evidence, and generate the corresponding scores or ratings for each model level.
  7. Validate and refine the automated scoring system: Validate the performance of the automated scoring system by comparing its results against human assessments or established benchmarks. Analyze the system's accuracy, precision, recall, or other relevant metrics to ensure its reliability. Refine the system as needed based on the evaluation results.
  8. Continuously update and improve the system: Stay updated with the latest cyber threat intelligence, attack techniques, and new evidentiary factors. Regularly update and improve the automated scoring system to adapt to emerging trends, refine the criteria, and enhance the accuracy of the assessments.
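
The steps above can be sketched as a small rule-based scorer. The feature names, thresholds, and point-to-level mapping below are illustrative assumptions; a production system would derive them from the model's published criteria and validate them against analyst judgments:

```python
# Admiralty-style assessment levels, from weakest to strongest.
LEVELS = ["Level 1 (Indication)", "Level 2 (Reasonable Belief)",
          "Level 3 (Strong Belief)", "Level 4 (Fact)"]

def score_evidence(evidence: dict) -> str:
    """Map simple evidence features to a level.

    Each satisfied criterion raises the level by one; the criteria
    and thresholds here are assumptions for this sketch.
    """
    points = 0
    if evidence.get("corroborating_sources", 0) >= 2:
        points += 1
    if evidence.get("source_reliability", 0.0) >= 0.8:
        points += 1
    if evidence.get("independently_verified", False):
        points += 1
    return LEVELS[points]

e = {"corroborating_sources": 3,
     "source_reliability": 0.9,
     "independently_verified": False}
```

With two of three criteria met, `score_evidence(e)` returns "Level 3 (Strong Belief)"; evidence satisfying no criterion stays at "Level 1 (Indication)".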

Automating the Admiralty Scoring Model in assessing cyber evidence requires expertise in cybersecurity, data analysis, and software development. Involve domain experts, cybersecurity analysts, and data scientists to ensure effective implementation and alignment with your organization's specific requirements or use case.

Integrating the CRAAP test (Currency, Relevance, Authority, Accuracy, Purpose) with the NATO Admiralty Scoring Model can provide a comprehensive assessment framework for evaluating the credibility and quality of cyber evidence.

  1. Define the criteria: Combine the criteria from both models to create a unified set of evaluation criteria. Use the complete NATO Admiralty Scoring Model criteria as the main assessment levels, while the CRAAP test can serve as sub-criteria within each level. For example:
    • Level 1 (Indication): Assess the evidence for Currency, Relevance, and Authority.
    • Level 2 (Reasonable Belief): Evaluate the evidence for Accuracy and Purpose.
    • Level 3 (Strong Belief): Analyze the evidence for all criteria of the CRAAP test.
    • Level 4 (Fact): Further verify the evidence for all criteria of the CRAAP test.
  2. Assign weights or scores: Determine each criterion's relative importance or weight within the unified assessment framework. You can assign higher weights to the criteria from the NATO Admiralty Scoring Model since they represent the main levels, while the CRAAP test criteria can have lower weights as sub-criteria. Alternatively, you can assign scores or ratings to each criterion based on their relevance and impact on the overall assessment.
  3. Develop an automated assessment process: Design algorithms or rules based on the defined criteria and weights to automate the assessment process. This can involve natural language processing techniques, text analysis, or other methods to extract relevant information and evaluate the evidence against the criteria.
  4. Extract relevant evidence features: Identify the features or attributes of the evidence that align with the CRAAP test criteria and the NATO Admiralty Scoring Model. For example, for Authority, you may consider factors such as author credentials, source reputation, or peer review status. Extract these features from the evidence used in the automated assessment process.
  5. Apply the unified assessment framework: Integrate the automated assessment process with the unified framework. Input the evidence, apply the algorithms or rules to evaluate the evidence against the defined criteria, and generate scores or ratings for each criterion and overall assessment level.
  6. Aggregate and interpret the results: Aggregate the scores or ratings from each criterion and level to obtain an overall assessment of the evidence. Establish thresholds or decision rules to determine the final classification of the evidence based on the combined scores or ratings. Interpret the results to communicate the credibility and quality of the evidence to stakeholders.
  7. Validate and refine the integrated framework: Validate the performance of the integrated framework by comparing its results with manual assessments or established benchmarks. Assess the accuracy, precision, recall, or other relevant metrics to ensure its effectiveness. Continuously refine and improve the framework based on feedback and new insights.
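
The weighting scheme in step 2 can be sketched as follows; the weights and level thresholds are illustrative assumptions, not a published standard:

```python
# CRAAP sub-criteria scored 0.0-1.0 by upstream analysis;
# the weights are assumptions for this sketch.
CRAAP_WEIGHTS = {"currency": 0.15, "relevance": 0.20, "authority": 0.25,
                 "accuracy": 0.30, "purpose": 0.10}

def craap_score(scores: dict) -> float:
    """Weighted CRAAP score in [0, 1]; missing criteria count as 0."""
    return sum(CRAAP_WEIGHTS[c] * scores.get(c, 0.0) for c in CRAAP_WEIGHTS)

def classify(scores: dict, thresholds=(0.4, 0.6, 0.8)) -> int:
    """Map the weighted score to an Admiralty-style level 1-4."""
    s = craap_score(scores)
    return 1 + sum(s >= t for t in thresholds)

scores = {"currency": 0.9, "relevance": 0.8, "authority": 0.7,
          "accuracy": 0.6, "purpose": 0.5}
```

For this sample the weighted score works out to 0.70, which crosses the first two thresholds and lands the evidence at level 3.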

By integrating the CRAAP test with the NATO Admiralty Scoring Model, you can enhance the assessment process, considering the technical aspects of the evidence and its currency, relevance, authority, accuracy, and purpose. This integration provides a more comprehensive and well-rounded evaluation of the evidence's credibility and quality.

Copyright 2023 Treadstone 71

Automating source credibility, reliability, and accuracy

Verifying intelligence sources' credibility, reliability, and accuracy often requires a combination of manual analysis and critical thinking. However, we can employ algorithms and techniques to support this process:

  1. Textual Analysis: Textual analysis algorithms can help assess the credibility and reliability of written sources. Apply Natural Language Processing (NLP) techniques, such as sentiment analysis, named entity recognition, and topic modeling, to analyze the language, sentiment, entities mentioned, and consistency of information within the text. This can provide insights into the credibility and reliability of the source.
  2. Social Network Analysis: Use social network analysis algorithms to examine the connections and relationships among individuals or organizations involved in intelligence sources. By mapping the network and analyzing its structure, centrality measures, and patterns of interactions, you can identify potential biases, affiliations, or credibility indicators.
  3. Data Fusion: Data fusion algorithms combine information from multiple sources to identify patterns, overlaps, or discrepancies. By comparing data from diverse sources and applying algorithms such as clustering, similarity analysis, or anomaly detection, you can assess the consistency and accuracy of the information provided by various sources.
  4. Reputation Analysis: Reputation analysis algorithms evaluate sources’ reputation and histories based on historical data and user feedback. These algorithms consider factors such as the credibility of previous reports, the expertise or authority of the source, and the level of trust assigned by other users or systems. Reputation analysis can help gauge the reliability and accuracy of intelligence sources.
  5. Bayesian Analysis: Bayesian analysis techniques can be employed to update a source’s accuracy probability based on new evidence or information. Bayesian algorithms use prior probabilities and update them with new data to estimate the likelihood of a source being accurate or reliable. By iteratively updating the probabilities, you can refine the assessment of the sources over time.
  6. Machine Learning-based Classification: Train machine learning algorithms, such as supervised classification models, to categorize sources based on their credibility or accuracy. By providing labeled training data (e.g., credible vs. non-credible sources), these algorithms can learn patterns and features that distinguish reliable sources from less reliable ones. This can assist in automatically classifying and assessing the credibility of intelligence sources.
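
The Bayesian analysis technique above can be sketched with a Beta-Binomial update, where each confirmed or refuted report refines the estimate of the source's accuracy. The uniform Beta(1, 1) prior is an assumption of this sketch:

```python
class SourceAccuracy:
    """Beta-Binomial estimate of a source's accuracy.

    Starts from a uniform Beta(1, 1) prior (an assumption of this
    sketch) and updates as reports are confirmed or refuted.
    """
    def __init__(self, alpha: float = 1.0, beta: float = 1.0):
        self.alpha = alpha  # 1 + confirmed reports
        self.beta = beta    # 1 + refuted reports

    def update(self, confirmed: bool) -> None:
        if confirmed:
            self.alpha += 1
        else:
            self.beta += 1

    def estimate(self) -> float:
        """Posterior mean probability that the next report is accurate."""
        return self.alpha / (self.alpha + self.beta)

src = SourceAccuracy()
for outcome in [True, True, True, False]:
    src.update(outcome)
```

After three confirmed reports and one refuted report the posterior mean is 4/6, roughly 0.67, and it continues to sharpen as more outcomes arrive.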

While algorithms can support the verification process, human judgment, and critical thinking remain crucial. Use algorithms to augment and assist human analysts in assessing source credibility, reliability, and accuracy. Combining automated techniques and human expertise is necessary to ensure a comprehensive and robust evaluation of intelligence sources.

Specific algorithms we commonly use in the context of verifying the credibility, reliability, and accuracy of intelligence sources:

  1. Naive Bayes Classifier: Naive Bayes is a supervised machine learning algorithm that calculates the probability of a source as reliable or accurate based on features extracted from the source's content or metadata. It assumes independence among the features and uses Bayes' theorem to make predictions. Train Naive Bayes on labeled data to classify sources as credible or non-credible.
  2. Support Vector Machines (SVM): SVM is a supervised learning algorithm used for classification tasks. It works by finding an optimal hyperplane that separates different classes. Train SVM on labeled data, where sources are classified as reliable or unreliable. Once trained, it can classify new sources based on their features, such as language patterns, linguistic cues, or metadata.
  3. Random Forest: Random Forest is an ensemble learning algorithm that combines multiple decision trees to make predictions. We can train Random Forest on labeled data based on various features to classify sources as credible or not. Random Forest can manage complex relationships between features and provide insights into the importance of varied factors for source credibility.
  4. PageRank Algorithm: Originally developed for ranking web pages, the PageRank algorithm can be adapted to assess the credibility and importance of intelligence sources. PageRank evaluates sources' connectivity and link structure to determine their reputation and influence within a network. Sources with high PageRank scores are considered reliable and credible.
  5. TrustRank Algorithm: TrustRank is an algorithm that measures the trustworthiness of sources based on their relationships with trusted seed sources. It assesses the quality and reliability of the links pointing to a source and propagates trust scores accordingly. Use TrustRank to identify trustworthy sources and filter out potentially unreliable ones.
  6. Sentiment Analysis: Sentiment analysis algorithms use NLP techniques to analyze the sentiment or opinion expressed in source texts. These algorithms can identify biases, subjectivity, or potential inaccuracies in the information presented by assessing the sentiment, attitudes, and emotions conveyed. Sentiment analysis can be useful in evaluating the tone and reliability of intelligence sources.
  7. Network Analysis: Apply network analysis algorithms, such as centrality measures (e.g., degree centrality, betweenness centrality) or community detection algorithms, to analyze the connections and relationships among sources. These algorithms help identify influential or central sources within a network, assess the reliability of sources based on their network position, and detect potential biases or cliques.

The choice of algorithms depends on the specific context, available data, and objectives of the analysis. Additionally, train and fine-tune these algorithms using relevant training data to align with the requirements for verifying intelligence sources.
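
As a concrete illustration, the PageRank adaptation in item 4 can be sketched in plain Python with a simple power-iteration loop; the citation graph among sources below is entirely hypothetical.

```python
# Minimal PageRank sketch (pure Python) for ranking intelligence sources
# by their link/citation structure. The source graph is a hypothetical example.

def pagerank(graph, damping=0.85, iterations=50):
    """graph: dict mapping source -> list of sources it cites/links to."""
    nodes = list(graph)
    n = len(nodes)
    rank = {node: 1.0 / n for node in nodes}
    for _ in range(iterations):
        new_rank = {node: (1.0 - damping) / n for node in nodes}
        for node, outlinks in graph.items():
            if outlinks:
                share = damping * rank[node] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
            else:  # dangling node: spread its rank evenly
                for target in nodes:
                    new_rank[target] += damping * rank[node] / n
        rank = new_rank
    return rank

# Hypothetical citation network among four sources
sources = {
    "gov_report": ["vendor_blog", "osint_feed"],
    "vendor_blog": ["osint_feed"],
    "osint_feed": ["gov_report"],
    "anon_forum": ["osint_feed"],
}
scores = pagerank(sources)  # "osint_feed", cited by three sources, ranks highest
```

A production system would of course use a graph library at scale, but the scoring logic is exactly this accumulation of link-weighted reputation.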

Copyright 2023 Treadstone 71 

Speeding the intelligence analysis peer review process through process automation

Automated peer review processes can be valuable in validating intelligence reports. With the advent of artificial intelligence and natural language processing, viable automation is not far off.

  1. Design an automated peer review framework: Develop a framework incorporating automated peer review processes into your intelligence analysis system. Define the specific assessment criteria and guidelines for the review, such as accuracy, relevance, clarity, coherence, and adherence to intelligence community standards.
  2. Identify qualified reviewers: Identify a pool of qualified reviewers within your organization or intelligence community who possess the necessary expertise and knowledge in the subject matter. Consider their experience, domain expertise, and familiarity with the intelligence analysis process.

  3. Define review criteria and metrics: Establish the specific criteria and metrics against which intelligence reports are evaluated. These can include factors such as the quality and accuracy of sources, logical reasoning, use of SATs, coherence of analysis, and adherence to intelligence community standards. Define quantitative or qualitative metrics for application during the review process.
  4. Implement automated review tools: Leverage automated review tools or platforms that can facilitate the review process. These tools can include text analysis algorithms, natural language processing (NLP) techniques, and machine learning models designed to assess and evaluate the quality and characteristics of the reports. Such tools can assist in identifying potential errors, inconsistencies, or gaps in the analysis.
  5. Peer review assignment and scheduling: Develop a mechanism for assigning intelligence reports to peer reviewers based on their expertise and workload. Implement a scheduling system that ensures timely and efficient review cycles, considering the required turnaround time for each report.
  6. Reviewer feedback and ratings: Enable the reviewers to provide feedback, comments, and ratings on the reports they review. Develop a standardized template or form that guides the reviewers in capturing their observations, suggestions, and any necessary corrections. Consider incorporating a rating system that quantifies the quality and relevance of the reports.
  7. Aggregate and analyze reviewer feedback: Analyze the feedback and ratings provided by the reviewers to identify common patterns, areas of improvement, or potential issues in the reports. Utilize data analytics techniques to gain insights from the aggregated reviewer feedback, such as identifying recurring strengths or weaknesses in the analysis.
  8. Iterative improvement process: Incorporate the feedback received from the automated peer review process into an iterative improvement cycle. Use the insights gained from the review to refine the analysis methodologies, address identified weaknesses, and enhance the overall quality of the intelligence reports.
  9. Monitor and track review performance: Continuously monitor and track the performance of the automated peer review processes. Analyze metrics such as review completion time, agreement levels among reviewers, and reviewer performance to identify opportunities for process optimization and ensure the review system's effectiveness and efficiency.
  10. Provide feedback and guidance to analysts: Use the reviewer feedback to provide guidance and support to analysts. Share the review results with analysts, highlighting areas for improvement and providing recommendations for enhancing their analysis skills. Encourage a feedback loop between reviewers and analysts to foster a culture of continuous learning and improvement.

By integrating automated peer review processes into your intelligence analysis workflow, you can validate and enhance the quality of intelligence reports. This approach promotes collaboration, objectivity, and adherence to standards within your internal organization and external intelligence-sharing structures, ultimately improving the accuracy and reliability of the analysis.
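
A minimal sketch of the "automated review tools" step, assuming a rule-based pre-screen that runs before human review; the checks, regular expressions, and weights below are illustrative stand-ins for trained NLP models.

```python
# Hypothetical rule-based pre-screen for intelligence reports. Each check is a
# (name, predicate, weight) triple; a production system would replace these
# heuristics with trained classifiers.
import re

CHECKS = [
    ("cites_sources", lambda t: bool(re.search(r"\[\d+\]|\(source:", t, re.I)), 0.4),
    ("states_confidence", lambda t: bool(re.search(r"\b(high|moderate|low) confidence\b", t, re.I)), 0.3),
    ("minimum_length", lambda t: len(t.split()) >= 50, 0.3),
]

def prescreen(report_text):
    """Return a weighted quality score and the list of failed checks."""
    score, failed = 0.0, []
    for name, test, weight in CHECKS:
        if test(report_text):
            score += weight
        else:
            failed.append(name)
    return round(score, 2), failed
```

Failed checks can then route the report back to the analyst before it ever reaches a human reviewer, shortening the review cycle.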

Copyright 2023 Treadstone 71

Integrating and automating Structured Analytic Techniques (SATs)

Treadstone 71 uses SATs as a standard part of the intelligence lifecycle. Integrating and automating Structured Analytic Techniques (SATs) involves using technology and computational tools to streamline their application. We have models that do just that, following the steps and methods below.

  1. Standardize SAT Frameworks: Develop standardized frameworks for applying SATs. This includes defining the various SAT techniques, their purpose, and the steps involved in each technique. Create templates or guidelines that analysts can follow when using SATs.
  2. Develop SAT Software Tools: Design and develop software tools specifically tailored for SATs. These tools can provide automated support for executing SAT techniques, such as entity relationship analysis, link analysis, timeline analysis, and hypothesis generation. The tools can automate repetitive tasks, enhance data visualization, and assist in pattern recognition.
  3. Natural Language Processing (NLP): Utilize NLP techniques to automate the extraction and analysis of unstructured text data. NLP algorithms can process large volumes of textual information, identify key entities, relationships, and sentiments, and convert them into structured data for further SAT analysis.

  4. Data Integration and Fusion: Integrate diverse data sources and apply data fusion techniques to combine structured and unstructured data. Automated data integration allows for a holistic analysis using SATs by providing a comprehensive view of the available information.
  5. Machine Learning and AI: Leverage machine learning and AI algorithms to automate certain aspects of SATs. For example, training machine learning models to identify patterns, anomalies, or trends in data, assisting analysts in generating hypotheses or identifying areas of interest. AI techniques can automate repetitive tasks and provide recommendations based on historical patterns and trends.
  6. Visualization Tools: Implement data visualization tools to present complex data in a visually intuitive manner. Interactive dashboards, network graphs, and heat maps can help analysts explore and understand relationships, dependencies, and patterns identified through SATs. Automated visualization tools facilitate quick and comprehensive analysis.
  7. Workflow Automation: Automate the workflow of applying SATs by developing systems or platforms that guide analysts through the process. These systems can provide step-by-step instructions, automate data preprocessing tasks, and integrate various analysis techniques seamlessly.
  8. Collaboration and Knowledge Sharing Platforms: Implement collaborative platforms where analysts can share and discuss the application of SATs. These platforms can facilitate knowledge sharing, provide access to shared datasets, and allow for collective analysis, leveraging the expertise of multiple analysts.
  9. Continuous Improvement: Continuously evaluate and refine the automated SAT processes. Incorporate feedback from analysts, monitor the effectiveness of the automated tools, and make enhancements to improve their performance and usability. Stay updated with advancements in technology and analytic methodologies to ensure the automation aligns with the evolving needs of the analysis process.
  10. Training and Skill Development: Provide training and support to analysts in using the automated SAT tools effectively. Offer guidance on interpreting automated results, understanding limitations, and leveraging automation to enhance their analytic capabilities.

By implementing these methods, you can integrate and automate SATs, enhancing the efficiency and effectiveness of the analysis process. Combining technology, data integration, machine learning, and collaborative platforms empowers analysts to apply SATs more comprehensively and consistently, ultimately leading to more informed and valuable insights. Commonly used SATs include the following:

  1. Analysis of Competing Hypotheses (ACH): A technique that systematically evaluates multiple hypotheses and their supporting and contradicting evidence to determine the most plausible explanation.
  2. Key Assumptions Check (KAC): This involves identifying and evaluating the key assumptions underlying an analysis to assess their validity, reliability, and potential impact on the conclusions.
  3. Indicators and Warning Analysis (IWA): Focuses on identifying and monitoring indicators that suggest potential threats or significant developments, enabling timely warning and proactive measures.
  4. Alternative Futures Analysis (AFA): Examines multiple plausible future scenarios to anticipate and prepare for different outcomes.
  5. Red Team Analysis: Involves the creation of a separate team or group that challenges the assumptions, analysis, and conclusions of the main analysis, providing alternative perspectives and critical analysis.
  6. Decision Support Analysis (DSA): Provides structured methods and techniques to aid decision-makers in evaluating options, weighing risks and benefits, and selecting the most suitable course of action.
  7. Link Analysis: Analyzes and visualizes relationships and connections between entities, such as individuals, organizations, or events, to understand networks, patterns, and dependencies.
  8. Timeline Analysis: Constructs a chronological sequence of events to identify patterns, trends, or anomalies over time and aid in understanding causality and impact.
  9. SWOT Analysis: Evaluates the strengths, weaknesses, opportunities, and threats associated with a particular subject, such as an organization, project, or policy, to inform strategic decision-making.
  10. Structured Brainstorming: Facilitates a structured approach to generating ideas, insights, and potential solutions by leveraging a group’s collective intelligence.
  11. Delphi Method: Involves gathering input from a panel of experts through a series of questionnaires or iterative surveys, aiming to achieve consensus or identify patterns and trends.
  12. Cognitive Bias Mitigation: Focuses on recognizing and addressing cognitive biases that may influence analysis, decision-making, and perception of information.
  13. Hypothesis Development: Involves formulating testable hypotheses based on available information, expertise, and logical reasoning to guide the analysis and investigation.
  14. Influence Diagrams: Graphical representation of causal relationships, dependencies, and influences among factors and variables to understand complex systems and their interdependencies.
  15. Structured Argumentation: Involves constructing logical arguments with premises, evidence, and conclusions to support or refute a particular proposition or hypothesis.
  16. Pattern Analysis: Identifies and analyzes recurring patterns in data or events to uncover insights, relationships, and trends.
  17. Bayesian Analysis: Applies Bayesian probability theory to update and refine beliefs and hypotheses based on new evidence and prior probabilities.
  18. Impact Analysis: Assesses the potential consequences and implications of factors, events, or decisions to understand their potential effects.
  19. Comparative Analysis: Compares and contrasts different entities, options, or scenarios to evaluate their relative strengths, weaknesses, advantages, and disadvantages.
  20. Structured Analytic Decision Making (SADM): Provides a framework for structured decision-making processes, incorporating SATs to enhance analysis, evaluation, and decision-making.

These techniques offer structured frameworks and methodologies to guide the analysis process, improve objectivity, and enhance the quality of insights and decision-making. Depending on the specific analysis requirements, analysts can select and apply the most appropriate SATs.

Analysis of Competing Hypotheses (ACH):

  • Develop a module that allows analysts to input hypotheses and supporting/contradicting evidence.
  • Apply Bayesian reasoning algorithms to evaluate the likelihood of each hypothesis based on the evidence provided.
  • Present the results in a user-friendly interface, ranking the hypotheses by their probability of being true.
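
The Bayesian ranking step above can be approximated with a simple update loop; the hypotheses, priors, and evidence likelihoods below are illustrative placeholders, not a prescribed model.

```python
# Sketch of ACH scoring with Bayes' rule: multiply each hypothesis's prior by
# the likelihood of each piece of evidence, renormalize, and rank.

def rank_hypotheses(priors, likelihoods, evidence):
    """priors: {hypothesis: P(H)}; likelihoods: {hypothesis: {evidence: P(E|H)}}."""
    posterior = dict(priors)
    for e in evidence:
        # 0.5 = neutral likelihood for evidence a hypothesis says nothing about
        posterior = {h: p * likelihoods[h].get(e, 0.5) for h, p in posterior.items()}
        total = sum(posterior.values())
        posterior = {h: p / total for h, p in posterior.items()}
    return sorted(posterior.items(), key=lambda kv: kv[1], reverse=True)

priors = {"state_actor": 0.5, "criminal_group": 0.5}
likelihoods = {
    "state_actor": {"custom_malware": 0.8, "ransom_note": 0.1},
    "criminal_group": {"custom_malware": 0.3, "ransom_note": 0.9},
}
ranked = rank_hypotheses(priors, likelihoods, ["custom_malware", "ransom_note"])
```

Here the ransom note outweighs the custom tooling, so the criminal-group hypothesis ranks first; the interface would present this ranking with the supporting evidence alongside it.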

Key Assumptions Check (KAC):

  • Provide a framework for analysts to identify and document key assumptions.
  • Implement algorithms to evaluate the validity and impact of each assumption.
  • Generate visualizations or reports that highlight critical assumptions and their potential effects on the analysis.

Indicators and Warning Analysis (IWA):

  • Develop a data ingestion pipeline to collect and process relevant indicators from various sources.
  • Apply anomaly detection algorithms to identify potential warning signs or indicators of emerging threats.
  • Implement real-time monitoring and alerting mechanisms to notify analysts of significant changes or potential risks.
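
One minimal way to implement the anomaly-detection step, assuming a simple statistical baseline rather than a trained model; the indicator data and threshold are hypothetical.

```python
# Illustrative warning-indicator check: flag values more than three standard
# deviations from the historical mean. Real deployments would use seasonality-
# aware models, but the alerting contract is the same.
import statistics

def flag_anomalies(history, new_values, threshold=3.0):
    mean = statistics.mean(history)
    stdev = statistics.stdev(history)
    return [v for v in new_values if abs(v - mean) > threshold * stdev]

# Hypothetical daily counts of failed logins against a monitored asset
baseline = [12, 15, 11, 14, 13, 12, 16, 14]
alerts = flag_anomalies(baseline, [15, 90])  # only the spike to 90 is flagged
```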

Alternative Futures Analysis (AFA):

  • Design a scenario generation module that allows analysts to define different future scenarios.
  • Develop algorithms to simulate and evaluate the outcomes of each scenario based on available data and assumptions.
  • Present the results through visualizations, highlighting the implications and potential risks associated with each future scenario.

Red Team Analysis:

  • Enable collaboration features that facilitate the formation of a red team and integration with the AI application.
  • Provide tools for the red team to challenge assumptions, critique the analysis, and provide alternative perspectives.
  • Incorporate a feedback mechanism that captures the red team's input and incorporates it into the analysis process.

Decision Support Analysis (DSA):

  • Develop a decision framework that guides analysts through a structured decision-making process.
  • Incorporate SATs such as SWOT analysis, comparative analysis, and cognitive bias mitigation techniques within the decision framework.
  • Provide recommendations based on the analysis results to support informed decision-making.

Link Analysis:

  • Implement algorithms to identify and analyze relationships between entities.
  • Visualize the network of relationships using graph visualization techniques.
  • Enable interactive exploration of the network, allowing analysts to drill down into specific connections and extract insights.
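
A bare-bones version of the relationship-analysis step, using degree centrality over a hypothetical entity graph; a production system would use a graph library and richer centrality measures.

```python
# Degree centrality: how connected each entity is, normalized by the maximum
# possible number of connections. The edge list is an illustrative example.
from collections import Counter

def degree_centrality(edges):
    """edges: iterable of (entity_a, entity_b) pairs; returns normalized degrees."""
    degrees = Counter()
    nodes = set()
    for a, b in edges:
        degrees[a] += 1
        degrees[b] += 1
        nodes.update((a, b))
    n = len(nodes)
    return {node: degrees[node] / (n - 1) for node in nodes}

edges = [("actor_1", "forum_x"), ("actor_1", "wallet_9"),
         ("actor_2", "forum_x"), ("actor_1", "actor_2")]
central = degree_centrality(edges)  # actor_1 is the most connected entity
```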

Timeline Analysis:

  • Develop a module to construct timelines based on event data.
  • Apply algorithms to identify patterns, trends, and anomalies within the timeline.
  • Enable interactive visualization and exploration of the timeline, allowing analysts to investigate causal relationships and assess the impact of events.
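
A sketch of the timeline-construction step: order events chronologically and flag unusually long gaps, which often mark phase changes in an intrusion. The events and the seven-day gap threshold are illustrative assumptions.

```python
# Build a sorted timeline from (date, label) records and surface long gaps.
from datetime import datetime, timedelta

events = [
    ("2023-03-01", "phishing email delivered"),
    ("2023-03-02", "credential reuse detected"),
    ("2023-03-20", "lateral movement observed"),
]

def build_timeline(raw, gap=timedelta(days=7)):
    timeline = sorted((datetime.fromisoformat(d), label) for d, label in raw)
    gaps = []
    for (t1, _), (t2, label) in zip(timeline, timeline[1:]):
        if t2 - t1 > gap:
            gaps.append((label, t2 - t1))  # event that follows the long gap
    return timeline, gaps

timeline, long_gaps = build_timeline(events)
```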

SWOT Analysis:

  • Provide a framework for analysts to conduct SWOT analysis within the AI application.
  • Develop algorithms to automatically analyze strengths, weaknesses, opportunities, and threats based on relevant data.
  • Present the SWOT analysis results in a clear and structured format, highlighting key insights and recommendations.

Structured Brainstorming:

  • Integrate collaborative features that allow analysts to participate in structured brainstorming sessions.
  • Provide prompts and guidelines to facilitate the generation of ideas and insights.
  • Capture and organize the results of the brainstorming sessions for further analysis and evaluation.

Delphi Method:

  • Develop a module that facilitates iterative surveys or questionnaires to collect input from a panel of experts.
  • Apply statistical analysis techniques to aggregate and synthesize the expert opinions.
  • Provide a visualization of the consensus or patterns emerging from the Delphi process.
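
A sketch of the aggregation step, assuming numeric expert estimates and a simple interquartile-range convergence rule; both the estimates and the convergence threshold are assumptions, not part of the Delphi method itself.

```python
# Summarize one Delphi round with median and interquartile range (IQR).
# A narrow IQR suggests the panel is converging; a wide one means another round.
import statistics

def summarize_round(estimates):
    q = statistics.quantiles(estimates, n=4)  # exclusive quartiles
    return {"median": statistics.median(estimates), "iqr": q[2] - q[0]}

round_one = [30, 45, 50, 55, 70]   # hypothetical expert estimates (percent)
summary = summarize_round(round_one)
converged = summary["iqr"] <= 10   # illustrative convergence threshold
```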

Cognitive Bias Mitigation:

  • Implement a module that raises awareness of common cognitive biases and provides guidance on mitigating them.
  • Integrate reminders and prompts within the AI application to prompt analysts to consider biases during the analysis process.
  • Offer checklists or decision support tools that help identify and address biases in the analysis.

Hypothesis Development:

  • Provide a module that assists analysts in formulating testable hypotheses based on available information.
  • Offer guidance on structuring hypotheses and identifying the evidence needed for evaluation.
  • Enable the AI application to analyze the supporting evidence and provide feedback on the strength of the hypotheses.

Influence Diagrams:

  • Develop a visualization tool that allows analysts to create influence diagrams.
  • Enable the AI application to analyze the relationships and dependencies within the diagram.
  • Provide insights on the potential impacts of factors and how they affect the overall system.

Pattern Analysis:

  • Implement algorithms that automatically detect and analyze patterns in the data.
  • Apply machine learning techniques like clustering or anomaly detection to identify significant patterns.
  • Visualize and summarize the identified patterns to aid analysts in deriving insights and making informed conclusions.

Bayesian Analysis:

  • Develop a module that applies Bayesian probability theory to update beliefs and hypotheses based on new evidence.
  • Provide algorithms that calculate posterior probabilities based on prior probabilities and observed data.
  • Present the results in a way that allows analysts to understand the impact of new evidence on the analysis.
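
For a single hypothesis, the posterior update can be expressed compactly in odds form with likelihood ratios; all numbers below are illustrative.

```python
# Bayesian update via likelihood ratios: convert the prior to odds, multiply by
# P(E|H)/P(E|not H) for each new piece of evidence, convert back to probability.

def update(prior, likelihood_ratios):
    """prior: P(H); likelihood_ratios: one ratio per new evidence item."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Start at 30% belief; two supporting items (ratios > 1), one weak counter (< 1).
posterior = update(0.30, [4.0, 2.5, 0.8])  # about 0.77
```

Presenting the per-item ratios alongside the final posterior lets analysts see exactly how much each piece of evidence moved the assessment.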

Impact Analysis:

  • Incorporate algorithms that assess the potential consequences and implications of factors or events.
  • Enable the AI application to simulate and evaluate the impacts of various scenarios.
  • Provide visualizations or reports highlighting potential effects on different entities, systems, or environments.

Comparative Analysis:

  • Develop tools that enable analysts to compare and evaluate multiple entities, options, or scenarios.
  • Implement algorithms that calculate and present comparative metrics, such as scores, rankings, or ratings.
  • Provide visualizations or reports that facilitate a comprehensive and structured comparison.

Structured Analytic Decision Making (SADM):

  • Integrate the various SATs into a decision-support framework that guides analysts through the analysis process.
  • Provide step-by-step guidance, prompts, and templates for applying different SATs in a structured manner.
  • Enable the AI application to capture and organize the analysis outputs within the SADM framework for traceability and consistency.

Although not all-inclusive, the above list is a good starting point for integrating and automating structured analytic techniques.

By including these additional SATs in the AI application, analysts can leverage comprehensive techniques to support their analysis. We tailor each technique within an application to automate repetitive tasks, facilitate data analysis, provide visualizations, and offer decision support, leading to more efficient and effective analysis processes.

Structured Analytic Techniques (SATs) Integration:

  • Develop a module that allows analysts to integrate and combine multiple SATs seamlessly.
  • Provide a flexible framework that enables analysts to apply combined SATs based on the specific analysis requirements.
  • Ensure that the AI application supports the interoperability and interplay of different SATs to enhance the analysis process.

Sensitivity Analysis:

  • Implement algorithms that assess the sensitivity of analysis results to changes in assumptions, variables, or parameters.
  • Allow analysts to explore different scenarios and evaluate how sensitive the analysis outcomes are to various inputs.
  • Provide visualizations or reports that depict the sensitivity of the analysis and its potential impact on decision-making.
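
A one-at-a-time sensitivity sweep over a hypothetical scoring model illustrates the idea; the model, its weights, the baseline values, and the perturbation size are all assumptions.

```python
# Perturb each input up and down by a small delta and record the output swing.
# Inputs with the largest swings dominate the analysis outcome.

def threat_score(capability, intent, opportunity):
    """Hypothetical weighted scoring model."""
    return 0.5 * capability + 0.3 * intent + 0.2 * opportunity

def sensitivity(model, baseline, delta=0.1):
    swings = {}
    for name in baseline:
        high = dict(baseline, **{name: baseline[name] + delta})
        low = dict(baseline, **{name: baseline[name] - delta})
        swings[name] = round(model(**high) - model(**low), 3)
    return swings

swings = sensitivity(threat_score,
                     {"capability": 0.7, "intent": 0.5, "opportunity": 0.6})
```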

Data Fusion and Integration:

  • Develop mechanisms to integrate and fuse data from multiple sources, formats, and modalities.
  • Apply data integration techniques to enhance the completeness and accuracy of the analysis data.
  • Implement algorithms for resolving conflicts, handling missing data, and harmonizing diverse datasets.

Expert Systems and Knowledge Management:

  • Incorporate expert systems that capture and utilize the knowledge and expertise of domain specialists.
  • Develop a knowledge management system that enables the organization and retrieval of relevant information, insights, and lessons learned.
  • Leverage AI techniques, such as natural language processing and knowledge graphs, to facilitate knowledge discovery and retrieval.

Scenario Planning and Analysis:

  • Design a module that supports scenario planning and analysis.
  • Enable analysts to define and explore different plausible scenarios, considering a range of factors, assumptions, and uncertainties.
  • Apply SATs within the context of scenario planning, such as hypothesis development, impact analysis, and decision support, to evaluate and compare the outcomes of each scenario.

Calibration and Validation:

  • Develop methods to calibrate and validate AI models’ performance in the analysis process.
  • Implement techniques for measuring the models' accuracy, reliability, and robustness.
  • Incorporate feedback loops to continuously refine and improve the models based on real-world outcomes and user feedback.

Contextual Understanding:

  • Incorporate contextual understanding capabilities into the AI application to interpret and analyze data within its proper context.
  • Leverage techniques such as entity resolution, semantic analysis, and contextual reasoning to enhance the accuracy and relevance of the analysis.

Feedback and Iteration:

  • Implement mechanisms for analysts to provide feedback on the analysis results and the performance of the AI application.
  • Incorporate an iterative development process to continuously refine and improve the application based on user feedback and changing requirements.

Data Privacy and Security:

  • Ensure the AI application adheres to privacy regulations and security best practices.
  • Implement data anonymization techniques, access controls, and encryption methods to protect sensitive information processed by the application.

Scalability and Performance:

  • Design the AI application to manage large volumes of data and accommodate growing analytical needs.
  • Consider using distributed computing, parallel processing, and cloud-based infrastructure to enhance scalability and performance.

Domain-Specific Adaptation:

  • Customize the AI application to address the specific requirements and characteristics of the domain or intended industry.
  • Adapt the algorithms, models, and interfaces to align with the unique challenges and nuances of the targeted domain.

Human-in-the-Loop:

  • Incorporate human-in-the-loop capabilities to ensure human oversight and control in the analysis process.
  • Enable analysts to review and validate the AI-generated insights, refine hypotheses, and make final judgments based on their expertise.

Explainability and Transparency:

  • Provide explanations and justifications for the analysis outcomes generated by the AI application.
  • Incorporate techniques for model interpretability and explainability to enhance trust and transparency in the analysis process.

Continuous Learning:

  • Implement mechanisms for the AI application to continuously learn and adapt based on new data, evolving patterns, and user feedback.
  • Enable the application to update its models, algorithms, and knowledge base to improve accuracy and performance over time.

To effectively automate intelligence analysis using the various techniques and considerations mentioned, follow these steps:

  • Identify your specific analysis requirements: Determine the goals, scope, and objectives of your intelligence analysis. Understand the types of data, sources, and techniques that are relevant to your analysis domain.
  • Design the architecture and infrastructure: Plan and design the architecture for your automated intelligence analysis system. Consider scalability, performance, security, and privacy aspects. Determine whether on-premises or cloud-based infrastructure suits your needs.
  • Data collection and preprocessing: Set up mechanisms to collect relevant data from various sources, including structured and unstructured data. Implement preprocessing techniques such as data cleaning, normalization, and feature extraction to prepare the data for analysis.
  • Apply machine learning and AI algorithms: Utilize machine learning and AI algorithms to automate distinct aspects of intelligence analysis, such as data classification, clustering, anomaly detection, natural language processing, and predictive modeling. Choose and train models that align with your specific analysis goals.
  • Implement SATs and decision frameworks: Integrate the structured analytic techniques (SATs) and decision frameworks into your automation system. Develop modules or workflows that guide analysts through the application of SATs at appropriate stages of the analysis process.
  • Develop visualization and reporting capabilities: Create interactive visualizations, dashboards, and reports that present the analysis results in a user-friendly and easily interpretable manner. Incorporate features that allow analysts to drill down into details, explore relationships, and generate customized reports.
  • Human-in-the-loop integration: Implement human-in-the-loop capabilities to ensure human oversight, validation, and refinement of the automated analysis. Allow analysts to review and validate the automated insights, make judgments based on their expertise, and provide feedback for model improvement.
  • Continuous learning and improvement: Establish mechanisms for continuous learning and improvement of your automation system. Incorporate feedback loops, model retraining, and knowledge base updates based on new data, evolving patterns, and user feedback.
  • Evaluate and validate the system: Regularly assess the performance, accuracy, and effectiveness of the automated intelligence analysis system. Conduct validation exercises to compare automated results with manual analysis or ground truth data. Continuously refine and optimize the system based on evaluation outcomes.
  • Iterative development and collaboration: Foster an iterative and collaborative approach to development. Involve analysts, subject matter experts, and stakeholders throughout the process to ensure the system meets their needs and aligns with the evolving requirements of intelligence analysis.
  • Compliance and security considerations: Ensure compliance with relevant regulations, privacy guidelines, and security best practices. Implement measures to protect sensitive data and prevent unauthorized access to the automated analysis system.
  • Training and adoption: Provide appropriate training and support to analysts to familiarize them with the automated intelligence analysis system. Encourage adoption and utilization of the system by demonstrating its benefits, efficiency gains, and the value it adds to the analysis process.

By following these steps, you can integrate and automate various techniques, considerations, and SATs into a cohesive intelligence analysis system. The system will leverage machine learning, AI algorithms, visualization, and human-in-the-loop capabilities to streamline the analysis process, improve efficiency, and generate valuable insights.

Automatic Report Generation

Once you have integrated SATs into the intelligence analysis process, we suggest automatically generating analytic reports. To do so:

  • Define report templates: Design and define the structure and format of the analytic reports. Determine the sections, subsections, and key components for report inclusion based on the analysis requirements and desired output.
  • Identify report generation triggers: Determine the triggers or conditions that initiate the report generation process. This could be based on specific events, time intervals, completion of analysis tasks, or any other relevant criteria.
  • Extract relevant insights: Extract the relevant insights and findings from the analysis results generated by the automated intelligence analysis system. This includes key observations, patterns, trends, anomalies, and significant relationships identified through the application of SATs.
  • Summarize and contextualize the findings: Summarize the extracted insights in a concise and understandable manner. Provide the necessary context and background information to help readers comprehend the significance and implications of the findings.
  • Generate visualizations: Incorporate visualizations, charts, graphs, and diagrams that effectively represent the analysis results. Choose appropriate visualization techniques to present the data and insights in a visually appealing and informative way.
  • Generate textual descriptions: Automatically generate textual descriptions that elaborate on the findings and insights. Utilize natural language generation techniques to transform the extracted information into coherent and readable narratives.
  • Ensure report coherence and flow: Organize report sections and subsections logically so the report flows smoothly. Maintain consistency in language, style, and formatting throughout the report to enhance readability and comprehension.
  • Include supporting evidence and references: Include references to the supporting evidence and data sources used in the analysis. Provide links, citations, or footnotes that enable readers to access the underlying information for further investigation or validation.
  • Review and edit generated reports: Implement a review and editing process to refine the automatically generated reports. Incorporate mechanisms for human oversight to ensure accuracy, coherence, and adherence to quality standards.
  • Automate report generation: Develop a module or workflow that automates the report generation process based on the defined templates and triggers. Configure the system to generate reports at specified intervals or to meet triggered conditions.
  • Distribution and sharing: Establish mechanisms for distributing and sharing the generated reports with relevant stakeholders. This could involve email notifications, secure file sharing, or integration with collaboration platforms for seamless access and dissemination of the reports.
  • Monitor and improve report generation: Continuously monitor the generated reports for quality, relevance, and user feedback. Collect feedback from users and recipients to identify areas for improvement and iterate on the report generation process.

By following these steps, you can automate the generation of analytic reports based on the insights and findings derived from the integrated SATs in your intelligence analysis process. This streamlines the reporting workflow, ensures consistency, and enhances the efficiency of delivering actionable intelligence to decision-makers.
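
A minimal template-driven sketch of the report-generation step using only the Python standard library; the template fields and findings below are placeholders for real pipeline output.

```python
# Render an analytic brief from extracted findings using stdlib templating.
from string import Template
from datetime import date

TEMPLATE = Template(
    "INTELLIGENCE BRIEF - $report_date\n"
    "Key findings:\n"
    "$findings\n"
    "Confidence: $confidence\n"
)

def generate_report(findings, confidence):
    """Render a brief from extracted findings and an overall confidence call."""
    body = "\n".join(f"  - {item}" for item in findings)
    return TEMPLATE.substitute(
        report_date=date.today().isoformat(),
        findings=body,
        confidence=confidence,
    )

report = generate_report(
    ["Hypothetical actor registered new infrastructure",
     "Phishing kit reuse observed across campaigns"],
    "moderate",
)
```

In practice the findings list would come from the upstream SAT modules, and natural language generation would expand each bullet into a fuller narrative before human review.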

Copyright 2023 Treadstone 71

Analyzing Targeted Cyber-HUMINT

Summary

Analyzing targeted Cyber-Human Intelligence (HUMINT) involves automatically gathering, processing, and analyzing human-derived information to gain insights into adversary cyber activities. The automation of HUMINT analysis presents challenges due to its human-centric nature, but there are steps you can take to enhance efficiency. The general approach is to identify relevant sources of targeted cyber HUMINT; develop automated mechanisms to collect information from those sources; apply text mining and natural language processing (NLP) to process and analyze the collected data; fuse the collected data with other sources of intelligence; perform contextual analysis, cross-referencing and verification, and threat actor profiling; visualize and report the results; and continuously monitor and update the process.

Here is a general approach:

  1. Source Identification: Identify relevant sources of targeted cyber HUMINT, such as cybersecurity researchers, intelligence agencies, open-source intelligence (OSINT) providers, industry experts, insiders, or online forums. Maintain a curated list of sources consistently providing reliable and credible information on adversary cyber activities.
  2. Data Collection and Aggregation: Develop automated mechanisms to collect information from identified sources. This may involve monitoring blogs, social media accounts, forums, and specialized websites for discussions, reports, or disclosures related to adversary cyber operations. Use web scraping, RSS feeds, or APIs to gather data from these sources.
  3. Text Mining and Natural Language Processing (NLP): Apply text mining and NLP techniques to automatically process and analyze the collected HUMINT data. Use tools like sentiment analysis, named entity recognition, topic modeling, and language translation to extract relevant information, sentiments, key entities, and themes related to adversary cyber activities.
  4. Information Fusion: Combine the collected HUMINT data with other sources of intelligence, such as technical data, threat intelligence feeds, or historical cyber-attack data. This fusion helps in cross-referencing and validating information, providing a more comprehensive understanding of adversary cyber operations.
  5. Contextual Analysis: Develop algorithms that can understand the contextual relationships between different pieces of information. Analyze the social, political, and cultural factors that may influence adversary cyber activities. Consider geopolitical developments, regional conflicts, sanctions, or other factors that could impact their motivations and tactics.
  6. Cross-Referencing and Verification: Cross-reference the collected HUMINT with other credible sources to verify the accuracy and reliability of the information. This may involve comparing information across multiple sources, validating claims with technical indicators, or collaborating with trusted partners to gain additional insights.
  7. Threat Actor Profiling: Create profiles of adversary threat actors based on the HUMINT information collected. This includes identifying key individuals, groups, or organizations involved in adversary cyber operations, their affiliations, tactics, techniques, and objectives. Use machine learning algorithms to identify patterns and behaviors associated with specific threat actors.
  8. Visualization and Reporting: Develop visualizations and reporting mechanisms to present the analyzed HUMINT data in a digestible format. Interactive dashboards, network diagrams, and timelines can help understand the relationships, timelines, and impact of adversary cyber activities. Generate automated reports highlighting key findings, emerging trends, or notable developments.
  9. Continuous Monitoring and Update: Establish a system to continuously monitor and update the automated analysis process. Keep track of new sources of HUMINT, update algorithms as needed, and incorporate feedback from analysts to improve the accuracy and relevance of the automated analysis. 
    1. Define Key Performance Indicators (KPIs): Identify the key metrics and indicators that will help you assess the performance and impact of your automated analysis processes. These could include metrics related to data accuracy, timeliness, false positives/negatives, detection rates, and analyst productivity. Establish clear goals and targets for each KPI.
    2. Establish Data Feedback Loops: Develop mechanisms to collect feedback from analysts, users, or stakeholders who interact with the automated analysis system. This feedback can provide valuable insights into the system's strengths, weaknesses, and areas for improvement. Consider implementing feedback mechanisms such as surveys, user interviews, or regular meetings with the analyst team.
    3. Regular Data Quality Assurance: Implement procedures to ensure the quality and integrity of the data used by the automated analysis processes. This includes verifying the data sources' accuracy, assessing the collected information's reliability, and conducting periodic checks to identify any data inconsistencies or issues. Address data quality concerns promptly to maintain the reliability of your analysis.
    4. Continuous Algorithm Evaluation: Regularly evaluate the performance of the algorithms and models used in the automated analysis processes. Monitor their accuracy, precision, recall, and other relevant metrics. Employ techniques like cross-validation, A/B testing, or comparison with ground truth data to assess the performance and identify areas for improvement. Adjust algorithms as necessary based on the evaluation results.
    5. Stay Abreast of the Threat Landscape: Maintain up-to-date knowledge of the evolving threat landscape, including emerging threats, tactics, techniques, and procedures (TTPs) employed by threat actors, including Iranian cyber operations. Monitor industry reports, research papers, threat intelligence feeds, and information-sharing communities to stay informed about the latest developments. Update your analysis processes accordingly to reflect new threats and trends.
    6. Regular System Updates and Upgrades: Keep the automated analysis system updated with the latest software versions, security patches, and enhancements. Regularly assess the system's performance, scalability, and usability to identify areas that require improvement. Implement updates and feature enhancements to ensure the system's effectiveness and usability over time.
    7. Collaboration and Knowledge Sharing: Foster collaboration and knowledge sharing among your analysts and the cybersecurity community. Encourage sharing of insights, lessons learned, and best practices related to automated analysis. Participate in industry events, conferences, and communities to gain exposure to new techniques, tools, and approaches in automated analysis.
    8. Continuous Training and Skill Development: Provide regular training and skill development opportunities for analysts involved in the automated analysis processes. Keep them updated with the latest techniques, tools, and methodologies relevant to their work. Encourage professional development and ensure that analysts have the necessary skills to effectively utilize and interpret the automated system's results.
    9. Iterative Improvement: Continuously refine and improve the automated analysis processes based on feedback, evaluations, and lessons learned. Implement a feedback loop that allows for continuous improvement, with regular review cycles to identify areas where the system can be optimized. Actively seek input from analysts and stakeholders to ensure the system evolves to meet their evolving needs.

By following these steps, you can establish a robust and adaptable system that continuously monitors and updates your automated analysis processes, ensuring their effectiveness and relevance in the dynamic cybersecurity landscape.
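The KPI step above (define metrics and targets, then check the system against them) can be sketched in a few lines. The metric names, counts, and targets below are illustrative assumptions, not measurements from any real deployment:

```python
# Sketch: track automated-analysis KPIs against predefined targets.
# "Detection rate" is tp / (tp + fn); "false alarm rate" is the share
# of alerts that turn out to be false positives, fp / (fp + tp).

def evaluate_kpis(counts, targets):
    """Compute KPIs from raw counts and flag any KPI missing its target."""
    tp, fp, fn = counts["tp"], counts["fp"], counts["fn"]
    metrics = {
        "detection_rate": tp / (tp + fn) if tp + fn else 0.0,
        "false_alarm_rate": fp / (fp + tp) if fp + tp else 0.0,
    }
    misses = {}
    for name, value in metrics.items():
        target = targets[name]
        # Error-style metrics must stay at or below target; the rest at or above.
        ok = value <= target if "false" in name else value >= target
        if not ok:
            misses[name] = (value, target)
    return metrics, misses

# Illustrative counts and targets for one review cycle.
metrics, misses = evaluate_kpis(
    {"tp": 90, "fp": 10, "fn": 10},
    {"detection_rate": 0.95, "false_alarm_rate": 0.15},
)
```

Here the detection rate (0.9) misses its 0.95 target while the false alarm rate (0.1) is within bounds, so the review cycle would flag detection for improvement.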

How do you hone your algorithms to ensure maximum operability?

Copyright 2023 Treadstone 71

Regularly Evaluate Algorithm Performance

Regularly evaluating the performance of algorithms and models used in automated analysis processes is crucial to ensure their effectiveness and find areas for improvement.

Cross-Validation: Split your dataset into training and testing subsets and use cross-validation techniques such as k-fold or stratified cross-validation. This allows you to assess the model's performance on multiple subsets of the data, reducing the risk of overfitting or underfitting. Measure relevant metrics such as accuracy, precision, recall, F1-score, or area under the curve (AUC) to evaluate the model's performance.
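A minimal, standard-library sketch of k-fold cross-validation, using a deliberately trivial majority-class classifier as a stand-in for a real model (the dataset and model here are illustrative only):

```python
# Sketch: manual k-fold cross-validation with a majority-class classifier.
from collections import Counter

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n) if i < start or i >= start + size]
        yield train, test
        start += size

def cross_val_accuracy(labels, k=5):
    """Average accuracy of predicting the training-set majority class."""
    scores = []
    for train, test in k_fold_indices(len(labels), k):
        majority = Counter(labels[i] for i in train).most_common(1)[0][0]
        correct = sum(1 for i in test if labels[i] == majority)
        scores.append(correct / len(test))
    return sum(scores) / len(scores)
```

For a real model you would fit on the train indices and score on the test indices in the same loop; the fold mechanics stay identical.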

Confusion Matrix: Construct a confusion matrix to visualize your model’s performance. The confusion matrix shows the true positive, true negative, false positive, and false negative predictions made by the model. You can calculate various metrics from the confusion matrix such as accuracy, precision, recall, and F1-score, which supply insights into the model's performance for different classes or labels.
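The metrics named above fall directly out of the four confusion-matrix counts. A small sketch, with illustrative counts:

```python
# Sketch: derive accuracy, precision, recall, and F1 from a binary
# confusion matrix (tp, fp, fn, tn).

def confusion_metrics(tp, fp, fn, tn):
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Illustrative counts only.
m = confusion_metrics(tp=80, fp=20, fn=20, tn=80)
```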

Receiver Operating Characteristic (ROC) Curve: Use the ROC curve to evaluate the performance of binary classification models. The ROC curve plots the true positive rate against the false positive rate at various classification thresholds. The AUC score derived from the ROC curve is a commonly used metric to measure the model's ability to distinguish between classes. A higher AUC score shows better performance.
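A standard-library sketch of the ROC sweep and trapezoidal AUC described above. Labels and scores are illustrative, and ties between scores would need slightly more careful handling than this simple sweep provides:

```python
# Sketch: sweep thresholds from highest score down, accumulating the
# area under the ROC curve with the trapezoidal rule.

def roc_auc(labels, scores):
    """labels: 1 = positive, 0 = negative; higher score = more positive."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    auc = prev_fpr = prev_tpr = 0.0
    for _score, label in pairs:
        if label == 1:
            tp += 1
        else:
            fp += 1
        tpr, fpr = tp / pos, fp / neg
        auc += (fpr - prev_fpr) * (prev_tpr + tpr) / 2  # trapezoid area
        prev_fpr, prev_tpr = fpr, tpr
    return auc
```

A perfectly separating model scores 1.0; chance-level ranking hovers around 0.5.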

Precision-Recall Curve: Consider using the precision-recall curve for imbalanced datasets or scenarios where the focus is on positive instances. This curve plots precision against recall at various classification thresholds. The curve provides insights into the trade-off between precision and recall and can be helpful in assessing model performance when class distribution is uneven.

Comparison with Baseline Models: Set up baseline models representing simple or naive approaches to the problem you are trying to solve. Compare the performance of your algorithms and models against these baselines to understand the added value they provide. This comparison helps assess the relative improvement achieved by your automated analysis processes.

A/B Testing: If possible, conduct A/B testing by running multiple versions of your algorithms or models simultaneously and comparing their performance. Randomly assign incoming data samples to different versions and analyze the results. This method allows you to measure the impact of changes or updates to your algorithms and models in a controlled and statistically significant manner.
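The random-assignment step above is often implemented as deterministic hashing, so the same incoming sample always lands in the same variant and results stay reproducible. A minimal sketch (the variant names are arbitrary):

```python
# Sketch: deterministic A/B bucketing of incoming data samples by
# hashing a stable sample identifier.
import hashlib

def assign_variant(sample_id, variants=("A", "B")):
    """Return the same variant for the same sample_id on every call."""
    digest = hashlib.md5(sample_id.encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]
```

Each version of the algorithm then processes only the samples assigned to it, and the per-variant metrics can be compared for statistical significance.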

Feedback from Analysts and Subject Matter Experts: Seek feedback from analysts and experts working closely with the automated analysis system. They can provide insights based on their domain expertise and practical experience. Collect feedback on the accuracy, relevance, and usability of the results generated by the algorithms and models. Incorporate their input to refine and improve the performance of the system.

Continuous Monitoring: Implement a system to monitor the ongoing performance of your algorithms and models in real time. This can include monitoring metrics, alerts, or anomaly detection mechanisms. Track key performance indicators (KPIs) and compare them against predefined thresholds to identify any degradation in performance or anomalies that may require investigation.

We believe it is important to evaluate the performance of your algorithms and models on a regular basis, considering the specific objectives, datasets, and evaluation metrics relevant to your automated analysis processes. By employing these methods, you can assess the performance, identify areas for improvement, and make informed decisions to enhance the effectiveness of your automated analysis system.

Copyright 2023 Treadstone 71

Developing Automated Report-Generation Capabilities

Developing automated report-generation capabilities involves at least the following steps.

  1. Define Report Requirements: Start by deciding the purpose and scope of the reports you want to generate. Identify the target audience, the information they need, and the desired format and presentation style. This will help you set up clear goals and guidelines for the automated report-generation process.
  2. Identify Data Sources: Determine the data sources that will provide the necessary information for the reports. This can include threat intelligence feeds, security logs, vulnerability assessment results, incident response data, and any other relevant sources. Ensure you have automated mechanisms to collect and process this data.
  3. Design Report Templates: Create report templates that define the reports’ structure, layout, and content. Consider the specific requirements of your target audience and tailor the templates accordingly. This may involve selecting proper visualizations, charts, graphs, and textual elements to present the information effectively.
  4. Data Aggregation and Analysis: Develop automated processes to aggregate and analyze the data from the identified sources. This may involve integrating with data processing and analytics tools to extract relevant information, perform calculations, and generate insights. Use data filtering, aggregation, and statistical analysis techniques to derive meaningful findings.
  5. Report Generation Logic: Define the logic and rules for generating reports based on the analyzed data. This includes specifying the report generation frequency, deciding the time covered by each report, and setting thresholds or criteria for including specific information. For example, you may configure rules to include only high-priority threats or vulnerabilities that meet certain risk criteria.
  6. Report Generation Workflow: Design the workflow for report generation, which outlines the sequence of steps and processes involved. Determine the triggers or schedule for initiating report generation, data retrieval and processing, analysis, and template population. Ensure that the workflow is efficient, dependable, and well-documented.
  7. Automation Implementation: Develop the necessary automation scripts, modules, or applications to implement the report generation process. This may involve scripting languages, programming frameworks, or dedicated reporting tools. Leverage APIs, data connectors, or direct database access to retrieve and manipulate the required data.
  8. Report Customization Options: Provide customization options to allow users to tailor the reports to their specific needs. This can include parameters for selecting data filters, time ranges, report formats, or visualizations. Implement a user-friendly interface or command-line options to facilitate customization.
  9. Testing and Validation: Thoroughly evaluate the automated report generation process to ensure its accuracy, reliability, and performance. Validate that the generated reports align with the defined requirements and produce the desired insights. Conduct test runs using various data scenarios to identify and resolve any issues or inconsistencies.
  10. Deployment and Maintenance: Once you develop and validate the automated report generation capabilities, deploy the system to the production environment. Regularly monitor and maintain the system to address any updates or changes in data sources, report requirements, or underlying technologies. Seek feedback from users and incorporate enhancements or refinements based on their needs.

By following these steps, you can develop automated report generation capabilities that streamline the process of producing comprehensive and actionable reports, saving time and effort for your cybersecurity teams and stakeholders.
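The template-design, generation-logic, and population steps above can be sketched with the standard library's `string.Template`. The template fields, findings structure, and risk threshold here are hypothetical illustrations, not a prescribed schema:

```python
# Sketch: populate a report template from analysis findings, including
# only findings that meet a rule-based risk criterion.
from string import Template
from datetime import date

REPORT_TEMPLATE = Template(
    "Threat Report - $report_date\n"
    "High-priority findings: $high_count\n"
    "$summary\n"
)

def generate_report(findings, risk_threshold=7):
    """Render the template from findings at or above the risk threshold."""
    high = [f for f in findings if f["risk"] >= risk_threshold]
    summary = "; ".join(f["title"] for f in high) or "No high-priority findings."
    return REPORT_TEMPLATE.substitute(
        report_date=date.today().isoformat(),
        high_count=len(high),
        summary=summary,
    )

# Illustrative findings: only the first clears the threshold.
report = generate_report([
    {"title": "Phishing campaign", "risk": 9},
    {"title": "Routine scan", "risk": 3},
])
```

A scheduler or trigger condition would call `generate_report` at the defined intervals and hand the result to the distribution mechanisms.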

Copyright 2023 Treadstone 71 

Automating Cyber Intelligence Analysis

Automating cyber intelligence analysis involves using technology and data-driven approaches to gather, process, and analyze large volumes of information. While complete automation of the analysis process may not be possible due to the complex nature of cyber threats, there are several steps you can take to enhance efficiency and effectiveness. Here is a high-level overview of how you could approach automating cyber intelligence analysis:

  1. Data Collection: Develop automated mechanisms to collect data from various sources, such as security logs, threat intelligence feeds, social media platforms, dark web sources, and internal network telemetry. Use APIs, web scraping, data feeds, or specialized tools as data collectors.
  2. Data Aggregation and Normalization: Combine and normalize the collected data into a structured format to help analysis. This step involves converting diverse data formats into a unified schema and enriching the data with relevant contextual information.
  3. Threat Intelligence Enrichment: Leverage threat intelligence feeds and services to enrich the collected data. This enrichment process can include gathering information about known threats, indicators of compromise (IOCs), threat actor profiles, and attack techniques. This helps in attributing and contextualizing the collected data.
  4. Machine Learning and Natural Language Processing (NLP): Apply machine learning and NLP techniques to analyze unstructured data, such as security reports, articles, blogs, and forum discussions. These techniques can help find patterns, extract relevant information, and categorize data based on the identified themes.
  5. Threat Detection and Prioritization: Use automated algorithms and heuristics to find potential threats and prioritize them based on their severity, relevance, and impact. This could involve correlating collected data with known indicators of compromise, network traffic analysis, and anomaly detection.
  6. Visualization and Reporting: Develop interactive dashboards and visualization tools to present the analyzed information in a user-friendly format. These visualizations can provide real-time insights into threat landscapes, attack trends, and potential vulnerabilities, helping decision-making.
  7. Incident Response Automation: Integrate incident response platforms and security orchestration tools to automate incident handling processes. This includes automated notification, alert triaging, remediation workflows, and collaboration among security teams.
  8. Continuous Improvement: Continuously refine and update the automated analysis system by incorporating feedback from security analysts, monitoring emerging threat trends, and adapting to changes in the cybersecurity landscape.
  9. Threat Hunting Automation: Implement automated threat-hunting techniques to proactively search for potential threats and indicators of compromise within your network. This involves using behavioral analytics, anomaly detection algorithms, and machine learning to identify suspicious activities that may indicate a cyber-attack.
  10. Contextual Analysis: Develop algorithms that can understand the context and relationships between different data points. This could include analyzing historical data, identifying patterns across various data sources, and correlating seemingly unrelated information to uncover hidden connections.
  11. Predictive Analytics: Use predictive analytics and machine learning algorithms to forecast future threats and anticipate potential attack vectors. By analyzing historical data and threat trends, you can identify emerging patterns and predict the likelihood of specific cyber threats occurring.
  12. Automated Threat Intelligence Platforms: Adopt specialized threat intelligence platforms that automate the collection, aggregation, and analysis of threat intelligence data. These platforms use AI and machine learning algorithms to process vast amounts of information and provide actionable insights to security teams.
  13. Automated Vulnerability Management: Integrate vulnerability scanning tools with your automated analysis system to identify vulnerabilities within your network. This helps prioritize patching and remediation efforts based on the potential risk they pose.
  14. Chatbot and Natural Language Processing (NLP): Develop chatbot interfaces that use NLP techniques to understand and respond to security-related inquiries. These chatbots can assist security analysts by providing real-time information, answering frequently asked questions, and guiding them through the analysis process.
  15. Threat Intelligence Sharing: Take part in threat intelligence sharing communities and use automated mechanisms to exchange threat intelligence data with trusted partners. This can help in gaining access to a broader range of information and collective defense against evolving threats.
  16. Security Automation and Orchestration: Implement security orchestration, automation, and response (SOAR) platforms that streamline incident response workflows and automate repetitive tasks. These platforms can integrate with various security tools and leverage playbooks to automate incident investigation, containment, and remediation processes.
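The aggregation, normalization, and enrichment steps above can be sketched as mapping heterogeneous records onto one schema and tagging them against a known-IOC set. The field names, source formats, and indicator values below are hypothetical:

```python
# Sketch: normalize two hypothetical source formats into a unified
# schema, then enrich each record against a known-IOC set.

KNOWN_IOCS = {"198.51.100.7", "203.0.113.9"}  # illustrative indicators

def normalize(event):
    """Map a raw event dict onto {ip, timestamp, known_ioc}."""
    if "src_ip" in event:            # e.g. a firewall log entry
        ip, ts = event["src_ip"], event["time"]
    else:                            # e.g. a threat-feed record
        ip, ts = event["indicator"], event["first_seen"]
    return {"ip": ip, "timestamp": ts, "known_ioc": ip in KNOWN_IOCS}

events = [
    {"src_ip": "198.51.100.7", "time": "2023-05-01T10:00:00Z"},
    {"indicator": "192.0.2.1", "first_seen": "2023-05-02T08:30:00Z"},
]
normalized = [normalize(e) for e in events]
```

Downstream correlation, prioritization, and reporting all become simpler once every collector emits this one schema.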

Copyright 2023 Treadstone 71 

STEMPLES Plus as a Framework to Assess Cyber Capabilities

STEMPLES Plus is a framework used to assess the cyber capabilities of a country. STEMPLES Plus stands for Social, Technical, Economic, Military, Political, Legal, Educational, and Security (internal) factors, with "Plus" referring to additional factors such as Culture, Education, and Organizational structures. Treadstone 71 uses the STEMPLES Plus framework to assess an adversary country's cyber capabilities from the standpoint of their ability to execute various cyber operations against us.

Social Factors: Evaluate the social factors influencing a country's cyber capabilities. This includes the level of awareness and digital literacy among the population, the presence of skilled cybersecurity professionals, public perception of cybersecurity, and the level of cooperation between the government, private sector, and civil society in addressing cyber threats.

Technical Factors: Assess the technical aspects of a country's cyber capabilities. This involves evaluating the sophistication of the country's technological infrastructure, the availability of advanced cybersecurity tools and technologies, research and development efforts in cybersecurity, and the level of expertise in emerging technologies such as artificial intelligence, blockchain, or quantum computing.

Economic Factors: Examine the economic factors contributing to a country's cyber capabilities. Evaluate the investment in cybersecurity research and development, the presence of cybersecurity-related industries and businesses, the level of cybersecurity maturity in critical sectors, and the economic impact of cyber threats on the country's economy.

Military Factors: Evaluate the military aspects of a country's cyber capabilities. This includes assessing the presence and capabilities of dedicated military cyber units, the integration of cyber capabilities into military strategies and doctrines, the level of investment in cyber defense and offense capabilities, and the country's cyber warfare capabilities.

Political Factors: Analyze the political factors that shape a country's cyber capabilities. This involves assessing the government's commitment to cybersecurity, the existence of national cybersecurity strategies and policies, the legal framework governing cyber activities, international cooperation on cyber issues, and the country's diplomatic posture on cyber matters.

Legal Factors: Examine the legal framework governing cyber activities in the country. Evaluate the adequacy of laws and regulations related to cybersecurity, data protection, privacy, intellectual property, and cybercrime. Assess the enforcement mechanisms, legal procedures, and international legal obligations related to cyber activities.

Educational Factors: Consider the educational aspects of a country's cyber capabilities. This includes assessing academic commitments to cybersecurity, hybrid warfare, cognitive warfare, influence operations, and cyber intelligence and counterintelligence as they relate to conducting cyber operations, as well as the country's commercial environment for cyber conferences, information sharing, associations, ethical hacking groups, and awareness.

  • Security Factors: Incorporate security factors to assess the country's overall security posture, including the robustness of critical infrastructure protection, incident response capabilities, cybersecurity education and awareness programs, and the resilience of the country's cybersecurity ecosystem.
  • Religion: Assess the influence of religion on cybersecurity practices, policies, and attitudes within the country. Examine how religious beliefs and values may impact the perception of cybersecurity, privacy, and the use of technology.
  • Demographics: Analyze the demographic factors that can affect cyber capabilities, such as the size and diversity of the population, the level of digital literacy, the availability of skilled cybersecurity professionals, and the digital divide among different demographic groups.
  • Social Psychology: Consider social psychology factors that can influence cybersecurity practices, including trust, social norms, group dynamics, and individual behaviors. Analyze how social psychological factors may shape attitudes towards cybersecurity, data privacy, and adherence to security practices.
  • Strategic Factors: Evaluate the strategic dimensions of a country's cyber capabilities. This involves analyzing the country's long-term goals, priorities, and investments in cybersecurity, its cyber defense posture, offensive capabilities, and cyber intelligence capabilities. Assess the integration of cyber capabilities into national security strategies and the alignment of cyber objectives with broader geopolitical interests.

Additionally, we use the "Plus" factors in STEMPLES Plus—Culture, Education, and Organizational structures to provide additional insights into a country's cyber capabilities. These factors help assess the cultural attitudes toward cybersecurity, the state of cybersecurity education and training programs, and the organizational structures and collaborations that drive cybersecurity initiatives within the country.

By systematically analyzing the STEMPLES Plus factors, you can comprehensively understand a country's cyber capabilities, strengths, and weaknesses. This assessment can inform policy decisions, threat modeling, and the development of effective cybersecurity strategies and countermeasures.

By incorporating "Religion, Demographics, and Social Psychology" into the STEMPLES Plus framework, you can better understand a country's cyber capabilities and the contextual factors that influence them. This expanded framework helps capture the societal and human aspects that play a role in cybersecurity practices, policies, and attitudes within a given country.
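One simple way to make such an assessment comparable across countries is to aggregate per-factor analyst ratings into a weighted score. The 1-5 rating scale, equal default weights, and ratings below are illustrative assumptions, not part of the STEMPLES Plus framework itself:

```python
# Sketch: aggregate illustrative per-factor ratings (1-5 scale) into a
# single weighted capability score for one country.

FACTORS = ["social", "technical", "economic", "military", "political",
           "legal", "educational", "security"]

def capability_score(ratings, weights=None):
    """Weighted mean of per-factor ratings; unrated factors count as 0."""
    weights = weights or {f: 1.0 for f in FACTORS}
    total_weight = sum(weights.values())
    return sum(ratings.get(f, 0) * w for f, w in weights.items()) / total_weight

# Hypothetical country: average across the board, strong technically.
ratings = {f: 3 for f in FACTORS}
ratings["technical"] = 5
```

The per-factor ratings, not the composite number, carry the analytic weight; the score is only a convenience for ranking and trend tracking.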

 Copyright 2023 Treadstone 71 LLC

Iranian Influence Operations

Iranian Influence Operations - July 2020

Treadstone 71 monitors Iranian cyber and influence operations. On July 17, 2020, we noticed spikes in Twitter activity surrounding specific hashtags. The primary hashtag (مريم_رجوي_گه_خورد) targeted Maryam Rajavi. Maryam Rajavi is the leader of the People's Mujahedin of Iran, an organization trying to overthrow the Iranian government, and the President-elect of its National Council of Resistance of Iran (NCRI).[1] July 17, 2020, was the date of the #FreeIran2020 Global Summit online for the NCRI. The report below represents our assessment of an Iranian influence operation targeting the July 17, 2020 event.

Assessment

Treadstone 71 assesses with high confidence that the Iranian government, likely the Ministry of Intelligence and Security (MOIS) using Basiji cyber team members, executed an influence operation targeting the NCRI and the July 17, 2020, online conference.

 The intent of the 111,770 tweets likely included:[2]

  • Presenting malicious content about the NCRI during the summit.
  • Preventing in-country Iranian citizens from seeing NCRI content.
  • Causing chaos and confusion amongst NCRI members and Iranian citizens.
  • Emphasizing divisions amongst content viewers.
  • Cloning hashtags to control the narrative.

The MOIS effort is seemingly disjointed but, in fact, is a highly coordinated disinformation campaign. The program involves many fake accounts posting hundreds of tweets during a specific time. The posts use hashtags and direct targeting of political figures to gain maximum attention and, subsequently, more retweets.
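The burst pattern described above (many accounts posting hundreds of tweets in a specific window) is exactly what a simple volume-anomaly check can surface. A sketch with illustrative hourly tweet counts, not the actual campaign data:

```python
# Sketch: flag hours whose tweet volume spikes far above the baseline of
# the remaining hours, a crude signal for coordinated posting.
from statistics import mean, stdev

def spike_hours(hourly_counts, z_threshold=3.0):
    """Return indices whose count exceeds baseline mean + z * stdev."""
    spikes = []
    for i, count in enumerate(hourly_counts):
        baseline = hourly_counts[:i] + hourly_counts[i + 1:]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (count - mu) / sigma > z_threshold:
            spikes.append(i)
    return spikes

# Illustrative hashtag volumes: a coordinated burst at index 4.
counts = [120, 130, 115, 125, 4000, 118, 122]
```

Flagged hours would then be inspected for the account-level signs of coordination described above: account creation dates, posting cadence, and shared content.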

Download Brief


Information diversions in the conflict in Ukraine

The purpose of this study is to identify and classify the forms and methods of information warfare in the modern conflict in Ukraine.

Procedures and methods. The study was carried out using the methods of analysis, synthesis, generalization and interpretation of the results.

Results. The forms and methods of conducting information warfare in Ukraine under wartime conditions (strategic information operations, special propaganda, fakes, and operational games with elites) are identified and classified. It is shown that, in terms of intensity, special propaganda occupies the main place in the information struggle between the participants in the conflict; its goals and methods have not changed since the Cold War. Strategic information operations, which are operational combinations of foreign intelligence services, are present in this conflict at the current stage only in the form of the so-called incident in Bucha.

Continue Reading

Iranian Link Analysis - Threat actors across the spectrum

Iranian link analysis of various cyber threat actors. Download the eye-opening report here.

Mr.Tekide Baseball Card

Much has been written about Mr.Tekide and his crypters used by APT34 (OilRig) and others. Other organizations have documented information about Mr.Tekide's tools in 'celebrated' cyber attacks against Fortune 500 institutions, governments, educational organizations, and critical infrastructure entities.

Identification

However, Mr.Tekide's identity, background, locations, and own words have never been openly documented. Many believe that following an individual does not pay dividends. Treadstone 71 demonstrates Mr.Tekide's alignment to the Iranian government through years of support using crypters such as iloveyoucrypter, qazacrypter, and njRAT.

Download Brief


Cyber Intelligence Request for Information (RFI)

Request for Information (RFI) – Cyber Threat Intelligence

The RFI process includes any specific time-sensitive ad hoc requirement for intelligence information or products to support an ongoing event or incident not necessarily related to standing requirements or scheduled intelligence production. When the Cyber Threat Intelligence Center (CTIC) submits an RFI to internal groups, there is a series of standard requirements for the context and quality of the data requested.

Download Brief


High Level Benefits of the Cyber and Threat Intelligence Program Build Service

Our training examines Sherman Kent's Analytic Doctrine from the cyber perspective as well as the availability and use of OSINT tools. At the completion of this course, students understand the cyber intelligence lifecycle and the role and value of cyber intelligence relative to online targeting and collection in modern organizations, businesses, and governments, as well as the use of our advisory services.

Download Brief


The Treadstone 71 Difference

What you receive from Treadstone 71 is detailed information and intelligence on your adversary that far surpasses the technical realm. Where the Treadstone 71 service excels is in providing you with techniques, methods, capabilities, functions, strategies, and programs to build not only a fully functional intelligence capability but a sustainable program directly aligned with stakeholder requirements.

Download Brief


What Intelligence Can and Cannot Do

This intelligence brief explains the intricacies of cyber intelligence and what it can and cannot do.

Download Brief


Stakeholder Analysis

Understanding your stakeholders and what they need to help make decisions is more than half the battle. This brief covers the old adage "Know your professor, get an A."

Download Brief

Arabic Version (النسخة العربية)


Bulletproof Vests – Make them yourself

Syrian violations of sanctions, with Russian FSB assistance, to manufacture ballistic vests. Discovered by no organization other than Treadstone 71: no sensors, no aggregation of thousands of taps, just hard-nosed open-source collection and analysis, and an interesting read of false identities, dispersed purchasing, and deceit.

Download Brief


Middle Eastern Cyber Domain

Middle Eastern Cyber Domain – Iran/Syria/Israel

An academic review of these nation-states and their work to achieve cyber operations dominance.

Download Brief

Arabic Version (النسخة العربية)


Intelligence Games in the Power Grid

Intelligence Games in the Power Grid – Russian Cyber and Kinetic Actions Causing Risk

Unusual purchasing patterns by a Russian firm selling PLCs sourced from a Taiwanese company whose product software download site is riddled with holes. What could go wrong?

Download Brief


Statement of Cyber Counterintelligence

Statement of Cyber Counterintelligence – The 10 Commandments for Cyber CounterIntel

Thou shalt and thou shalt not. Own the cyber street while building creds. Follow these rules and maybe you will survive the onslaught.

Download Brief


Fallacies in Threat Intelligence

Fallacies in Threat Intelligence Lead to Fault Lines in Organizational Security Postures

This brief covers some general taxonomy along with a review of common mistakes in cyber and threat intelligence, how to avoid falling into these traps, and how to dig out if you do.

Download Brief

Arabic Version (النسخة العربية)


Contact Treadstone 71

Contact Treadstone 71 Today. Learn more about our Targeted Adversary Analysis, Cognitive Warfare Training, and Intelligence Tradecraft offerings.

Contact us today!