
CORE™

A Killer Engine that Powers Incredible Solutions

With CORE™, our AI-powered Cognitive Intelligence Machine, your data can give you certainty. Instead of running from complexity, CORE sees your volumes of complex data as a huge opportunity, and lets us run straight towards it. Because complex data is the raw material that we refine, step by step, from data to information to knowledge to understanding. Until we get the most valuable refined data of all: Wisdom. It’s wisdom that lets you see the Why of Things.

Credible, explainable AI is wholly dependent on one thing – high-resolution data. Without it, AI is no better than anything you already have today. Our approach to data hygiene and enrichment gives companies and developers a foundation to see what others cannot, so they can build an entirely new class of AI solutions that will finally deliver on the promise of AI.

Data ingestion, integration and unification of all unstructured (text, audio, video) and structured (transactional) data

Data Ingestion, Integration and Unification – Now companies have one simple approach to integrating all sources of data so the data can be enriched and stored in one location – a 360°, single version of the truth. In addition, through integrations with the most popular platforms, we make it even easier to develop your company’s most valuable data asset.

Automated data cleansing, data organization, noise filtering and base-level enrichments so further insights can be generated without disruption

Cleansing and Parsing – One of the persistent challenges in the text analytics industry is cleaning data, organizing it and adding base-level enrichments so further insights can be generated without disruption. Decooda is not immune to these issues, which is why we are continuously implementing new techniques that ensure dirty data is not allowed into your data reservoir. We filter out noise, spam and other irrelevant data that often pollute data lakes. Once data is allowed into the machine, parsing windows can be selected (character frames, n-grams, punctuation, document, document percentages and selectable frames) so that we can optimize the assessment of documents of different sizes (tweets, texts, articles, books, etc.) and accurately detect context and summarize insights.
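Decooda’s windowing API is not public, so the sketch below is only a rough stand-in: the frame names and the parse_windows function are hypothetical, but they illustrate how a window type could be chosen to fit the size of the document being analyzed.

import re
from typing import List

# Hypothetical sketch: frame types and the function name are illustrative
# stand-ins, not Decooda's actual API.
def parse_windows(text: str, frame: str = "document", size: int = 3) -> List[str]:
    """Split a document into analysis windows using a selectable frame type."""
    if frame == "character":
        # Fixed-width character frames, e.g. for very short texts like tweets.
        return [text[i:i + size] for i in range(0, len(text), size)]
    if frame == "ngram":
        # Sliding word n-grams, useful for phrase-level context detection.
        words = text.split()
        return [" ".join(words[i:i + size]) for i in range(len(words) - size + 1)]
    if frame == "punctuation":
        # Split on sentence-ending punctuation to get clause/sentence frames.
        return [s.strip() for s in re.split(r"[.!?]+", text) if s.strip()]
    if frame == "percentage":
        # Divide the document into `size` roughly equal slices, e.g. for books.
        step = max(1, len(text) // size)
        return [text[i:i + step] for i in range(0, len(text), step)]
    return [text]  # "document" frame: the whole text is one window

# Short texts suit character or n-gram frames; long articles suit percentage
# or whole-document frames.
print(parse_windows("Great service. Slow checkout. Would return!", frame="punctuation"))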

Descriptors of data, including topics, emotions, impact, state of mind, sentiment, who, what, when, where, why, and how many

Auto Topic Discovery with or without a Human in the Loop – While we believe we have the best generalizable auto topic discovery engine, we felt it was important to offer a discovery process that can be optimized for any domain (industry: customer experience, financial services, healthcare, etc.) and genre (source: tweets, blogs, articles, reviews, email, transcripts, etc.) of data. This is a significant innovation because domain- and genre-level specificity and accuracy are what lead to the best analytics insights and AI fuel. We offer three levels of training based on the requirement: standard models that require no training; unsupervised topic analysis, in which the machine identifies topics instantly without any human involvement; and human-in-the-loop training, in which a human simply provides hints or training data to help the machine generate a more refined or specific set of topics.
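Decooda’s discovery engine is proprietary, but the fully unsupervised mode described above can be illustrated with an open-source stand-in. The sketch below uses scikit-learn’s latent Dirichlet allocation to surface topics with no human involvement; the toy documents are invented for the example.

# Open-source illustration only: scikit-learn's LDA stands in for Decooda's
# proprietary unsupervised topic analysis.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [
    "The battery drains fast and the charger overheats",
    "Checkout was slow and the cashier seemed overwhelmed",
    "Battery life is great but charging takes forever",
    "Long lines at checkout ruined an otherwise good visit",
]

# Bag-of-words features; stop-word removal is a base-level cleansing step.
vectorizer = CountVectorizer(stop_words="english")
X = vectorizer.fit_transform(docs)

# Two topics, no human in the loop: the machine groups terms on its own.
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)

terms = vectorizer.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-4:][::-1]]
    print(f"Topic {i}: {top}")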

DATA ENRICHMENT

  • Frequency Analysis: Determine the most common words and sequences of words in the texts and calculate their frequencies. (Several of the enrichments in this list are illustrated in the sketch that follows it.)
    • N-Gram Analysis: Find the most frequent n-grams in a corpus.
  • Contrastive Corpus Analysis: Compare two sets of texts and find the key topics that are unique to each set.
    • Trend Analysis: Analyze and identify how the most prominent topics in a corpus change over a particular timeframe.
    • Temporal Analysis: Analyze and identify the uniqueness of the most prominent topics within each timeframe.
  • Information Extraction: Analyze the linguistic structure of the text at a granular level. Extract meaningful chunks of the text that can provide knowledge at a contextual level.
    • Text Parsing: Determine the structure of a meaningful unit of text.
      • Morphological Parsing: Determine the structure of a word by its morphological units.
      • Dependency Parsing: Determine the relationship of words within a sentence based on their syntactic dependencies.
      • Semantic Parsing: Transform the meaning conveyed in a text into a formal logical representation.
      • Constituency Parsing: Determine the structure of phrases within a sentence.
      • Part of Speech Tagging: Determine the part of speech of each word (e.g., adjective, proper noun, adverb).
    • Entity Detection: Find the entities that occur in a text (e.g., person, place, company).
      • Named Entity Recognition: Extract the named entities, such as organizations, people, or companies.
      • Entity Relationships: Extract the relationships between the entities detected in the text.
      • Entity Linking: Normalize and disambiguate each entity found in the text so that variant mentions map to a single entity.
    • Topic Detection: Determine which topics occur in a corpus.
      • Topic Modeling: Detect topics in a corpus using models built on word embeddings.
      • Prolog-Based Topic Detection Rules: Detect topics by applying declarative logic rules to the entities and relationships extracted from the text.
  • Word Sense Disambiguation: Determine the intended meaning of ambiguous words or phrases given the context in which they occur.
    • Stance Detection: Determine the author’s position toward a topic based on the opinion expressed in the text.
    • Coreference Resolution: Determine which entity each pronoun refers to.
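The enrichments above are standard NLP building blocks. As a rough illustration of several of them – n-gram frequency analysis, part-of-speech tagging, dependency parsing and named entity recognition – the sketch below uses the open-source spaCy library, which is only a stand-in for Decooda’s own enrichment pipeline.

# Open-source stand-ins for several enrichments above; Decooda's pipeline is
# proprietary, so spaCy and a simple Counter serve purely as illustrations.
from collections import Counter
import spacy  # pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")
doc = nlp("Tim Cook said Apple will open a new store in Austin next year.")

# Frequency / n-gram analysis: most common word bigrams in the text.
words = [t.text.lower() for t in doc if t.is_alpha]
print(Counter(zip(words, words[1:])).most_common(3))

# Part-of-speech tagging and dependency parsing.
for token in doc:
    print(token.text, token.pos_, token.dep_, "->", token.head.text)

# Named entity recognition: people, organizations, places, dates.
for ent in doc.ents:
    print(ent.text, ent.label_)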

Data relationships reveal perceived truths, facts, principles, patterns and skills related to a domain

Machine Generated Classifier Development: Most companies struggle to deliver reliable topic discovery because they are using probabilistic techniques like machine learning. These techniques have significant error bands implicit in their models that make it difficult to deliver accurate results (high precision and high recall), and even more so when the topics are nuanced. What’s innovative about Decooda’s technique is that we auto-generate a taxonomy of n-grams for the topics we detect. Why is this technique so great? Because taxonomies are not probabilistic; they are binary. And while most taxonomy approaches suffer from low recall, Decooda has created an approach that provides high precision and high recall – this is nirvana in the field of text analytics and AI data refineries.
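The binary (non-probabilistic) matching behavior described above can be sketched in a few lines. The topics and n-grams below are invented for illustration; in Decooda’s approach the taxonomy is auto-generated rather than hand-curated.

# Minimal sketch of binary taxonomy classification: a document either matches
# a topic's n-grams or it does not - there is no probability threshold to tune.
# The topics and n-grams here are invented; Decooda auto-generates taxonomies.
taxonomy = {
    "billing_issue": {"double charged", "billing error", "wrong amount"},
    "churn_risk": {"cancel my account", "switching to", "last straw"},
}

def classify(text: str) -> list:
    """Return every topic whose n-grams appear in the text."""
    lowered = text.lower()
    return [topic for topic, ngrams in taxonomy.items()
            if any(ng in lowered for ng in ngrams)]

print(classify("I was double charged and this is the last straw."))
# -> ['billing_issue', 'churn_risk']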

  • Decooda Differentiated Classifications:
    • Emotions and Cognitive States: Determine the emotions and cognitive states conveyed in the input text.
    • Sentiment Analysis: Classify a text as positive or negative and flag whether emotions or cognitive states are present.
      • Aspect Level Sentiment Analysis: Detect the sentiment as it relates to a particular characteristic of a topic.
      • Entity Level Sentiment Analysis: Determine the sentiment as it relates to a particular entity that is mentioned in the text.
      • Opinion Mining: Determine the emotions, cognitive states, and tone that are present in the text.
  • Machine Assisted Classifier Development: As if Machine Generated Classifier Development wasn’t good enough, Decooda has taken it to the next level by implementing a workflow that allows the average human to become a text analytics expert, refining and ordering classifiers down to the most granular and precise levels in minutes and hours instead of days and weeks.
  • Gen 2 Regular Expression Pattern Matching Classification: Taxonomy classifiers work well to identify key terms and find simple topics in text. However, using taxonomy classifiers to search for a topic within unstructured text strictly limits the results to specific terms. We leverage these taxonomies for their vocabulary and integrate them into regular expression classifiers to find topics despite the many linguistic variations that occur in speech and text. Using this method, we are able to account for tense, inflection, negation, and variations in spelling and word order with greater accuracy (see the first sketch after this list).
  • Machine Learning Classification (auto-select 1 of 6 ML techniques based on best fit – XGBoost, K-Nearest Neighbors – KNN, Random Forests and Decision Trees, Support Vector Machine, Naïve Bayes, Neural Net): Decooda does not subscribe to a single machine learning religion. We are agnostic, and we always want to have access to the best tool for the job, which is why we created a machine learning classification approach that allows users to simply upload training data; the machine then takes over, automatically goes through the feature engineering process, trains and runs all 6 ML techniques and auto-selects the best technique based on performance (see the second sketch after this list). If required, the user can then refine or reassign features based on need and re-run to assess results. This process can be repeated until the user is satisfied with the results.
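First sketch: a rough illustration of how taxonomy vocabulary might be folded into regular expression classifiers to absorb tense, inflection, negation and word-order variation. The patterns are simplified inventions, not Decooda’s production rules.

# Simplified illustration of regex classification built from taxonomy terms;
# Decooda's production patterns are proprietary and far more extensive.
import re

# The taxonomy supplies the stems; the regex adds inflection and word-order slack.
CANCEL = re.compile(
    r"\b(cancel(?:l?ed|ling|s)?|terminat(?:e|ed|ing))\b"  # tense/inflection
    r"(?:\W+\w+){0,3}?\W+"                                # up to 3 intervening words
    r"\b(account|subscription|service)\b",
    re.IGNORECASE,
)
NEGATION = re.compile(r"\b(?:not|never|won't|don't)\b(?:\W+\w+){0,2}?\W+cancel",
                      re.IGNORECASE)

def detect_cancellation(text: str) -> bool:
    """Flag cancellation intent unless it is explicitly negated."""
    return bool(CANCEL.search(text)) and not NEGATION.search(text)

print(detect_cancellation("I am cancelling my streaming subscription today"))  # True
print(detect_cancellation("I will not cancel my account after all"))           # False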
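Second sketch: a rough outline of the auto-selection loop, racing the six technique families with scikit-learn and XGBoost. The toy training data, feature step and scoring are illustrative assumptions, not Decooda’s actual pipeline.

# Illustrative sketch of auto-selecting the best of 6 ML techniques; requires
# scikit-learn and xgboost (pip install scikit-learn xgboost).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import MultinomialNB
from sklearn.neural_network import MLPClassifier
from xgboost import XGBClassifier

texts = ["love this product", "terrible support", "works great", "never again",
         "fantastic quality", "waste of money", "highly recommend", "very poor"]
labels = [1, 0, 1, 0, 1, 0, 1, 0]

# Feature engineering, reduced here to TF-IDF for brevity.
X = TfidfVectorizer().fit_transform(texts)

candidates = {
    "XGBoost": XGBClassifier(),
    "KNN": KNeighborsClassifier(n_neighbors=3),
    "RandomForest": RandomForestClassifier(),
    "SVM": SVC(),
    "NaiveBayes": MultinomialNB(),
    "NeuralNet": MLPClassifier(max_iter=500),
}

# Train and score all six techniques, then auto-select the best performer.
scores = {name: cross_val_score(model, X, labels, cv=2).mean()
          for name, model in candidates.items()}
best = max(scores, key=scores.get)
print(f"Selected {best} with mean accuracy {scores[best]:.2f}")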

Interpretation and comprehension of information and knowledge in order to understand why events unfolded the way they did, grasp context and predict results

Knowledge Graph – Decooda uses knowledge graphs to describe objects of interest and the relationships between them. For example, a knowledge graph may have nodes for technology companies, Apple being one of them, the management of Apple, the products they make, and so on. Each node may carry properties and links; the CEO relationship, for instance, links Apple to Steve Jobs (tagged “past”) and to Tim Cook (tagged “present”). Once knowledge is placed in the graph, the graph can be traversed to reveal that Apple is a technology company and Tim Cook is the current CEO.
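The Apple example above can be illustrated with the open-source networkx library; Decooda’s internal graph store is proprietary, so this is only a sketch of the same idea.

# Illustration of the Apple example with networkx (pip install networkx);
# edge properties distinguish the past and present CEO links.
import networkx as nx

G = nx.MultiDiGraph()
G.add_edge("Apple", "Technology Company", relation="is_a")
G.add_edge("Apple", "Steve Jobs", relation="CEO", tense="past")
G.add_edge("Apple", "Tim Cook", relation="CEO", tense="present")
G.add_edge("Apple", "iPhone", relation="makes")

# Traverse the graph to answer: who is Apple's current CEO?
for _, target, data in G.out_edges("Apple", data=True):
    if data.get("relation") == "CEO" and data.get("tense") == "present":
        print("Current CEO of Apple:", target)  # -> Tim Cook

# The same traversal confirms Apple is a technology company.
print(any(d.get("relation") == "is_a" and t == "Technology Company"
          for _, t, d in G.out_edges("Apple", data=True)))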

The transition from efficiently improving the quality of data to effectively leveraging it to do the right thing with certainty

  • Proprietary Knowledge – Decooda uses proprietary knowledge to solve problems in the face of ambiguous or incomplete information. Using beliefs about domain-specific entities (people, places and things), concepts and contexts, or domain knowledge, Decooda’s AI systems make default assumptions about the unknown, much as people do. As more knowledge of the world is discovered or learned over time, the AI system can update its assumptions using a knowledge maintenance process to ensure ongoing accuracy.
  • Gen 2 Regular Expression Virtual Context Classification – One of the challenges in the industry is that when a text analytics firm leans primarily on taxonomies, machine learning or regular expressions for unstructured data analysis, it often tries to approach every problem from that single tool’s point of view. The reality is that no single technique is well suited to address every type of requirement, and the problem is exacerbated when the same trained model is used across all genres of data. Decooda has changed the game by offering a text analytics solution that can weave together, or layer, any combination of our classification techniques (taxonomies, Boolean logic, regular expressions, machine learning and deep learning) to produce the most accurate results – Decooda’s agnostic approach allows you to use or construct the best solution for the job. This capability is tied back to our clients’ need to understand the context of the data they are analyzing. For example, understanding that a customer feels anger is very valuable, but it’s even more valuable when we combine this with an understanding of their negative purchase intent, the topic driving their emotion and the impact on their future behavior, which together could become a “Customer Defection” classifier (sketched below). This is the ultimate refined data that can inform you on exactly what you should do next.
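As a sketch of the layering idea, assume three upstream classifiers – emotion, purchase intent and topic – whose outputs feed a composite “Customer Defection” rule. All of the function names and trigger phrases below are hypothetical; in practice each layer could be a taxonomy, regex, Boolean or machine learning classifier.

# Hypothetical sketch of layered classification; the three upstream classifiers
# stand in for any mix of taxonomy, regex, Boolean and ML techniques.
def detect_emotion(text: str) -> str:
    return "anger" if any(w in text.lower() for w in ("furious", "fed up")) else "neutral"

def detect_purchase_intent(text: str) -> str:
    return "negative" if "never buying" in text.lower() else "unknown"

def detect_topic(text: str) -> str:
    return "customer_service" if "support" in text.lower() else "general"

def customer_defection(text: str) -> bool:
    """Composite classifier: anger plus negative purchase intent signals defection."""
    return (detect_emotion(text) == "anger"
            and detect_purchase_intent(text) == "negative")

msg = "I am fed up with your support team and never buying from you again."
print(customer_defection(msg), detect_topic(msg))  # True customer_service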

Where refined data that generates certainty goes to create great insights and solutions

Access

User Defined Dashboards and Shared Reporting – Because beauty is in the eye of the beholder, we implemented a dashboard visualization engine that allows users to customize their visualizations based on their unique needs. In addition, we have implemented an easy-to-use column totaling and cross-tab engine so users can immediately begin to analyze the relationships and insights in the data. When users have completed their analysis, insights can be embedded in reports and shared across the enterprise.

Exporting Data to the Database of Your Choice – While your insights can be retained in Decooda’s data lake, they can also be streamed via our API into the data lake or database of your choice (Hadoop, Spark, Snowflake, S3, Redis, etc.). Our agnostic approach gives you the flexibility to layer any BI reporting technology into your architecture for additional enterprise reporting (Power BI, Tableau, Qlik, etc.).
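Since the export endpoint is not documented here, the URL, token and payload shape in the sketch below are invented; it only shows the general pattern of streaming an enriched record out via a REST API.

# Hypothetical sketch only: the endpoint, token and payload shape are invented;
# consult Decooda's API documentation for the real calls.
import requests  # pip install requests

API_URL = "https://api.example.com/v1/insights/export"  # placeholder endpoint
HEADERS = {"Authorization": "Bearer <YOUR_TOKEN>"}

record = {
    "document_id": "doc-123",
    "topics": ["billing_issue"],
    "emotion": "anger",
    "sentiment": "negative",
}

# Stream each enriched record to the destination of your choice; the receiving
# service forwards it into Hadoop, Snowflake, S3, etc.
resp = requests.post(API_URL, headers=HEADERS, json=record, timeout=10)
resp.raise_for_status()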

Acceleration

Performance Enhancements through Acceleration – The perception of the market is that high-quality text analytics comes at a significant cost: access to the skills to perform the task, time invested in classifier development, processing time, and access to technology. Decooda has disrupted this model by simplifying the process so the average human can do the work of text analytics experts, AND through acceleration in our technology we can do the most herculean and seemingly impossible work in real time at an affordable cost – spend less and get more with Decooda.

Every customer’s state of mind in real-time with Decooda