API Endpoints

We provide a variety of Text Classification, Sentiment Analysis, Entity Extraction, and Summarization features that allow you to extract meaningful insight and understanding from textual content.

Most of our users combine two or more endpoints, depending on their use case. For example, for full coverage in entity extraction use cases, we recommend combining the Entity Extraction and Concept Extraction features.
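
A minimal Python sketch of that combination, aimed at coverage: take the union of Entity Extraction's organization mentions and Concept Extraction's linked surface forms. This is a sketch only; the client object and the response shapes follow the Python SDK examples later in this section.

text = "ACME corp was founded by John Smith in Chicago."

entities = client.Entities({"text": text})
concepts = client.Concepts({"text": text})

# The union of organization mentions and concept surface forms gives
# broader coverage than either endpoint alone.
names = set(entities["entities"].get("organization", []))
for value in concepts["concepts"].values():
  for sf in value["surfaceForms"]:
    names.add(sf["string"])
print(names)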

Classification

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/classify \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d url"=http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
textapi.classify({
  url: 'http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile'
}, function(error, response) {
  if (error === null) {
    response['categories'].forEach(function(c) {
      console.log(c);
    });
  }
});
url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
classifications = client.Classify({"url": url})
for category in classifications['categories']:
  print(category)
<?php
$url = 'http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile';
$classifications = $textapi->Classify(array('url' => $url));
foreach($classifications->categories as $category) {
  var_dump($category);
}
?>
ClassifyParams.Builder builder = ClassifyParams.newBuilder();
java.net.URL url = new java.net.URL("http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile");
builder.setUrl(url);
Classifications classifications = client.classify(builder.build());
for (Category category: classifications.getCategories()) {
    System.out.println(category);
}
url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"

classifications = client.classify(url: url)

classifications[:categories].each do |category|
  puts category
end
url := "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
classifyParams := &textapi.ClassifyParams{URL: url}
classifications, err := client.Classify(classifyParams)
if err != nil {
    panic(err)
}
for _, c := range classifications.Categories {
    fmt.Printf("%v\n", c)
}
string url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile";

Classify classifications = client.Classify(url: url);

foreach (var category in classifications.Categories)
{
  Console.Write(category.Label);
}

Knowing the high-level semantic category of an unlabelled document such as a webpage or article can be extremely helpful in various applications. The Classification endpoint helps you categorize any text or URL according to a predefined taxonomy.
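
As a sketch of how you might consume the result, the snippet below continues the Python SDK example above and keeps only high-confidence categories; the 0.7 threshold is an arbitrary assumption, not an API default:

classifications = client.Classify({"url": url})

# Keep only categories the classifier is reasonably sure about
# (0.7 is an arbitrary cut-off, not an API default).
confident = [c for c in classifications["categories"] if c["confidence"] >= 0.7]
for category in confident:
  print(category["label"], category["code"])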

HTTP Request

  • GET https://api.aylien.com/api/v1/classify
  • POST https://api.aylien.com/api/v1/classify

Parameters

Sample response (JSON):

{
  "language":"en",
  "categories":[
    {
      "label":"economy, business and finance - computing and information technology",
      "code":"04003000",
      "confidence":1
    }
  ],
  "text":"When Microsoft announced its wrenching..."
}
Parameter | Data type | Description                          | Default
text      | string    | Text to classify                     |
url       | string    | URL to classify                      |
language  | string    | Language (refer to Language Support) | auto

Classification by Taxonomy

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/classify/iab-qag \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d "url=http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
textapi.classifyByTaxonomy({
  'url': 'http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile',
  'taxonomy': 'iab-qag'
}, function(error, response) {
  if (error === null) {
    response['categories'].forEach(function(c) {
      console.log(c);
    });
  }
});
url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
classifications = client.ClassifyByTaxonomy({"url": url, "taxonomy": "iab-qag"})
for category in classifications['categories']:
  print(category)
<?php
$url = 'http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile';
$classifications = $textapi->ClassifyByTaxonomy(array("url" => $url, "taxonomy" => "iab-qag"));
foreach($classifications->categories as $category) {
  var_dump($category);
}
?>
ClassifyByTaxonomyParams.Builder builder = ClassifyByTaxonomyParams.newBuilder();
URL url = new URL("http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile");
builder.setUrl(url);
builder.setTaxonomy(ClassifyByTaxonomyParams.StandardTaxonomy.IAB_QAG);
TaxonomyClassifications response = client.classifyByTaxonomy(builder.build());
for (TaxonomyCategory c: response.getCategories()) {
  System.out.println(c);
}
url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"

response = client.classify_by_taxonomy url: url, taxonomy: "iab-qag"

puts response[:categories].map {|c| c[:label]}.join(', ')
url := "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile"
params := &textapi.ClassifyByTaxonomyParams{URL: url, Taxonomy: "iab-qag"}
classifications, err := client.ClassifyByTaxonomy(params)
if err != nil {
  panic(err)
}
for _, c := range classifications.Categories {
  fmt.Printf("%v\n", c)
}
string url = "http://techcrunch.com/2015/07/16/microsoft-will-never-give-up-on-mobile";

ClassifyByTaxonomy classifyByTaxonomy = client.ClassifyByTaxonomy("iab-qag", url: url);

foreach (var category in classifyByTaxonomy.Categories)
{
  Console.WriteLine(category.Label);
}

Knowing the high-level semantic category of an unlabelled document such as a web page or article can be extremely helpful in different applications. The Classification by Taxonomy endpoint helps you categorize any text or URL according to various classification schemes and taxonomies (see the Taxonomies section below).

Taxonomies

Our Classification by Taxonomy endpoint is capable of classifying content according to multiple taxonomies which can be selected by adding the ID of the taxonomy to the end of the /classify endpoint. Below you can see a list of these taxonomies and their definitions, and you can search the labels for each taxonomy on our News API documentation here.

Taxonomy           | Number of labels | Levels of depth | Commonly used for         | Taxonomy ID      | Definition
IPTC Subject Codes | 1400             | 3               | News articles, Blog posts | iptc-subjectcode | View
IAB QAG            | 392              | 2               | Websites, Advertisement   | iab-qag          | View

Traversing Taxonomies

We have standardized all of our supported taxonomies into a tree-like structure, which allows you to easily traverse from child categories to parent categories, recursively.

Each classification result contains an array of links, which contains links to the current taxonomy label (rel=self) as well as its parent(s), if any (rel=parent).

To retrieve the entire taxonomy, you can simply remove the category ID from the end of the link attribute, e.g. https://api.aylien.com/api/v1/classify/taxonomy/iab-qag.
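
A minimal Python sketch of that traversal, assuming the response shape shown in the sample JSON below, and assuming that a GET on a category link returns a category object of the same shape:

import requests

HEADERS = {
  "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",
  "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY"
}

def parent_chain(category):
  """Follow rel=parent links from a category up to the taxonomy root."""
  chain = []
  parents = [l["link"] for l in category["links"] if l["rel"] == "parent"]
  while parents:
    # Assumption: each link resolves to a category object with its own
    # "links" array, so the walk can continue recursively.
    parent = requests.get(parents[0], headers=HEADERS).json()
    chain.append(parent.get("label"))
    parents = [l["link"] for l in parent.get("links", []) if l["rel"] == "parent"]
  return chain

# Example: on the sample response below, parent_chain applied to the
# "Windows" category (IAB19-36) would return ["Technology & Computing"].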

HTTP Request

  • GET https://api.aylien.com/api/v1/classify/:taxonomy
  • POST https://api.aylien.com/api/v1/classify/:taxonomy

Parameters

Sample response (JSON):

{
  "categories": [
    {
      "confident": true,
      "id": "IAB19-36",
      "label": "Windows",
      "links": [
        {
          "link": "https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB19-36",
          "rel": "self"
        },
        {
          "link": "https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB19",
          "rel": "parent"
        }
      ],
      "score": 0.5675236066291172
    },
    {
      "confident": true,
      "id": "IAB19",
      "label": "Technology & Computing",
      "links": [
        {
          "link": "https://api.aylien.com/api/v1/classify/taxonomy/iab-qag/IAB19",
          "rel": "self"
        }
      ],
      "score": 0.46704140928338533
    }
  ],
  "language": "en",
  "taxonomy": "iab-qag",
  "text": "When Microsoft announced its wrenching..."
}
Parameter | Data type | Description                                                                                    | Default
taxonomy  | string    | Taxonomy to classify the document according to. Valid values are iab-qag and iptc-subjectcode |
text      | string    | Text to classify                                                                               |
url       | string    | URL to classify                                                                                |
language  | string    | Language (refer to Language Support)                                                           | auto

Sentiment Analysis

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/sentiment \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d mode="tweet" \
   -d text="John+is+a+very+good+football+player"
textapi.sentiment({
  text: 'John is a very good football player',
  mode: 'tweet'
}, function(error, response) {
  if (error === null) {
    console.log(response);
  }
});
text = 'John is a very good football player'
sentiment = client.Sentiment({'text': text})
print(sentiment)
<?php
$text = 'John is a very good football player';
$sentiment = $textapi->Sentiment(array('text' => $text));
var_dump($sentiment);
?>
SentimentParams.Builder builder = SentimentParams.newBuilder();
builder.setText("John is a very good football player");
builder.setMode("tweet");
Sentiment sentiment = client.sentiment(builder.build());
System.out.println(sentiment);
text = 'John is a very good football player'

sentiment = client.sentiment(text: text)

puts sentiment
text := "John is a very good football player"
sentimentParams := &textapi.SentimentParams{Text: text, Mode: "tweet"}
sentiment, err := client.Sentiment(sentimentParams)
if err != nil {
    panic(err)
}
fmt.Printf("%v\n", sentiment)
string text = "John is a very good football player";

Sentiment sentiment = client.Sentiment(text: text);

Console.WriteLine(sentiment.Polarity + " " + sentiment.PolarityConfidence);
Console.WriteLine(sentiment.Subjectivity + " " + sentiment.SubjectivityConfidence);

Extracting sentiment from a piece of text such as a tweet, a review or an article can provide us with valuable insight about the author's emotions and perspective: whether the tone is positive, neutral or negative, and whether the text is subjective (meaning it's reflecting the author's opinion) or objective (meaning it's expressing a fact). Our Sentiment Analysis endpoint is built exactly for this purpose.
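
One practical detail is choosing between the two modes listed in the parameter table below. Here is a rough Python sketch; the 280-character cut-off is an arbitrary assumption, and passing mode in the parameter dict mirrors the curl example above rather than a documented SDK signature:

def analyze_sentiment(client, text):
  # Heuristic: treat short snippets as tweets, longer text as documents.
  # The 280-character cut-off is an arbitrary choice, not an API rule.
  mode = "tweet" if len(text) <= 280 else "document"
  return client.Sentiment({"text": text, "mode": mode})

sentiment = analyze_sentiment(client, "John is a very good football player")
print(sentiment["polarity"], sentiment["polarity_confidence"])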

HTTP Request

  • GET https://api.aylien.com/api/v1/sentiment
  • POST https://api.aylien.com/api/v1/sentiment

Parameters

Sample response (JSON):

{
  "polarity":"positive",
  "subjectivity":"subjective",
  "text":"John is a very good football player",
  "polarity_confidence":0.9999936601153382,
  "subjectivity_confidence":0.9963778207617525
}
Parameter | Data type | Description                                                     | Default
mode      | string    | tweet (for short text) or document (for long text and reviews) | tweet
text      | string    | Text to analyze                                                 |
url       | string    | URL to analyze                                                  |
language  | string    | Language (refer to Language Support)                            | auto

Entity Level Sentiment Analysis

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/elsa \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d text="Barcelona+is+an+awesome+destination"
textapi.entityLevelSentiment({
  text: 'Barcelona is an awesome destination'
}, function(error, response) {
  if (error === null) {
    console.log(response);
  }
});
text = 'Barcelona is an awesome destination'
elsa = client.Elsa({'text': text})
print(elsa)
<?php
$text = 'Barcelona is an awesome destination';
$elsa = $textapi->EntityLevelSentiment(array('text' => $text));
var_dump($elsa);
?>
EntityLevelSentimentParams.Builder builder = EntityLevelSentimentParams.newBuilder();
builder.setText("Barcelona is an awesome destination");
EntitiesSentiment elsa = client.entityLevelSentiment(builder.build());
System.out.println(elsa);
text = 'Barcelona is an awesome destination'

elsa = client.elsa(text: text)

puts elsa
text := "Barcelona is an awesome destination"
elsaParams := &textapi.ElsaParams{Text: text}
elsa, err := client.Elsa(elsaParams)
if err != nil {
    panic(err)
}
fmt.Printf("%v\n", elsa)
string text = "Barcelona is an awesome destination";

EntityLevelSentiment elsa = client.EntityLevelSentiment(text: text);

foreach (var entity in elsa.Entities)
{
  Console.WriteLine(entity.Mentions[0].Text + " is " + entity.Mentions[0].OverallSentiment.Polarity);
}

The Entity-level Sentiment Analysis (ELSA) endpoint provides the sentiment associated with each entity mentioned in a document.

For every entity that is mentioned in a piece of text, ELSA will return:

  • a prediction of the sentiment polarity expressed toward that entity (positive, negative, or neutral)
  • the confidence score of the prediction
  • the entity’s type (for example Person, Organization, Location, Product)
  • DBpedia URIs where applicable

The ELSA endpoint is particularly useful for analyzing articles that reference multiple entities while expressing differing sentiments about each. You can use this to analyze how an entity is covered in a single document or across multiple documents.

The endpoint accepts both text and url as parameters; English is currently the only supported language.
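
As a sketch of consuming the result in Python, the loop below assumes the response shape shown in the sample JSON that follows:

elsa = client.Elsa({"text": "The Sistine Chapel is beautiful, but Venice smells really bad"})

# Report the overall polarity predicted for each entity, then each
# individual mention; field names follow the sample response below.
for entity in elsa["entities"]:
  overall = entity["overall_sentiment"]
  print(entity["type"], overall["polarity"], overall["confidence"])
  for mention in entity["mentions"]:
    print("  mention:", mention["text"], "at offset", mention["offset"])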

HTTP Request

  • GET https://api.aylien.com/api/v1/elsa
  • POST https://api.aylien.com/api/v1/elsa

Parameters

Sample response (JSON):

 {
  "text": "The Sistine Chapel is beautiful, but Venice smells really bad",
  "entities": [{"links": [{"confidence": 1.17,
                           "provider": "dbpedia",
                           "types": ["http://dbpedia.org/ontology/Place",
                                     "http://schema.org/Place",
                                     "http://dbpedia.org/ontology/PopulatedPlace",
                                     "http://dbpedia.org/ontology/Location",
                                     "http://dbpedia.org/ontology/City",
                                     "http://dbpedia.org/ontology/Settlement"],
                           "uri": "http://dbpedia.org/resource/Venice"}],
                "mentions": [{"confidence": 1.0,
                              "offset": 37,
                              "sentiment": {"confidence": 0.45,
                                            "polarity": "negative"},
                              "text": "Venice"}],
                "overall_sentiment": {"confidence": 0.45,
                                      "polarity": "negative"},
                "type": "Location"},
               {"links": [{"confidence": 0.05,
                           "provider": "dbpedia",
                           "types": ["http://dbpedia.org/ontology/Place",
                                     "http://dbpedia.org/ontology/Museum",
                                     "http://dbpedia.org/ontology/Chapel",
                                     "http://schema.org/Place",
                                     "http://dbpedia.org/ontology/ReligiousBuilding",
                                     "http://dbpedia.org/ontology/ArchitecturalStructure",
                                     "http://dbpedia.org/ontology/Location",
                                     "http://dbpedia.org/ontology/Building"],
                           "uri": "http://dbpedia.org/resource/Sistine_Chapel"}],
                "mentions": [{"confidence": 1.0,
                              "offset": 4,
                              "sentiment": {"confidence": 0.6,
                              "polarity": "positive"},
                              "text": "Sistine Chapel"}],
                "overall_sentiment": {"confidence": 0.6,
                                      "polarity": "positive"},
                "type": "Location"}]
}
Parameter | Data type | Description     | Default
text      | string    | Text to analyze |
url       | string    | URL to analyze  |

Aspect-Based Sentiment Analysis

Download and install SDKs from here.

curl "https://api.aylien.com/api/v1/absa/restaurants" \
  -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
  -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
  --data-urlencode "text=Delicious food. Disappointing service."
textapi.aspectBasedSentiment({
  'domain': 'restaurants',
  'text': 'Delicious food. Disappointing service'
}, function(err, response) {
  if (err === null) {
    response.aspects.forEach(function(aspect) {
      console.log(aspect);
    });
  }
});
text = "Delicious food. Disappointing service."
absa = client.AspectBasedSentiment({'domain': 'restaurants', 'text': text})
for aspect in absa['aspects']:
  print(aspect)
<?php
$text = 'Delicious food. Disappointing service.';
$absa = $textapi->AspectBasedSentiment(array('text' => $text, 'domain' => 'restaurants'));
foreach ($absa->aspects as $aspect) {
  var_dump($aspect);
}
?>
AspectBasedSentimentParams.Builder builder = AspectBasedSentimentParams.newBuilder();
builder.setDomain(AspectBasedSentimentParams.StandardDomain.RESTAURANTS);
builder.setText("Delicious food. Disappointing service.");
AspectsSentiment aspectsSentiment = client.aspectBasedSentiment(builder.build());
for (Aspect aspect: aspectsSentiment.getAspects()) {
  System.out.println(aspect);
}
for (AspectSentence sentence: aspectsSentiment.getSentences()) {
  System.out.println(sentence);
}
text = "Delicious food. Disappointing service."

response = client.aspect_based_sentiment(domain: "restaurants", text: text)

puts response[:aspects].join("\n")
params := &textapi.AspectBasedSentimentParams{
  Text:   "Delicious food. Disappointing service.",
  Domain: "restaurants",
}
sentiment, err := client.AspectBasedSentiment(params)
if err != nil {
  panic(err)
}
for _, a := range sentiment.Aspects {
  fmt.Printf("%v\n", a)
}
string text = "Delicious food. Disappointing service.";

AspectBasedSentiment aspectBasedSentiment = client.AspectBasedSentiment("restaurants", text: text);

foreach (var aspect in aspectBasedSentiment.Aspects)
{
  Console.WriteLine(aspect._Aspect + " is " + aspect.Polarity);
}

Certain types of documents, such as customer feedback or reviews, may contain fine-grained sentiment about different aspects of the entities (e.g. a product or service) that are mentioned in the document. For instance, a review about a hotel may contain opinionated sentences about its staff, beds and location. This information can be highly valuable for understanding customers' opinion about a particular service or product.

Using the Aspect-based Sentiment Analysis (ABSA) endpoint you can retrieve a list of aspects that are mentioned in a document belonging to a specific domain, and the sentiment of the author towards each of those aspects.

Supported Domains

The following domain values are currently supported and accepted for the domain parameter (a usage sketch follows the list):

  • "hotels"
  • "restaurants"
  • "cars"
  • "airlines"

HTTP Request

  • GET https://api.aylien.com/api/v1/absa/:domain
  • POST https://api.aylien.com/api/v1/absa/:domain

Parameters

Sample response (JSON):

{
  "text": "Delicious food. Disappointing service.",
  "domain": "restaurants",
  "aspects": [{
    "aspect": "food",
    "aspect_confidence": 0.9835863709449768,
    "polarity": "positive",
    "polarity_confidence": 0.9158669114112854
  }, {
    "aspect": "staff",
    "aspect_confidence": 0.9747142195701599,
    "polarity": "negative",
    "polarity_confidence": 0.9969394207000732
  }],
  "sentences": [{
    "text": "Delicious food.",
    "polarity": "positive",
    "polarity_confidence": 0.9158669114112854,
    "aspects": [{
      "aspect": "food",
      "aspect_confidence": 0.9835863709449768,
      "polarity": "positive",
      "polarity_confidence": 0.9158669114112854
    }]
  }, {
    "text": "Disappointing service.",
    "polarity": "negative",
    "polarity_confidence": 0.9969394207000732,
    "aspects": [{
      "aspect": "staff",
      "aspect_confidence": 0.9747142195701599,
      "polarity": "negative",
      "polarity_confidence": 0.9969394207000732
    }]
  }]
}
Parameter | Data type | Description                                                         | Default
domain    | string    | The domain or industry that the text belongs to (e.g. restaurants) |
text      | string    | Text to analyze                                                     |
url       | string    | URL to analyze                                                      |
language  | string    | Language (refer to Language Support)                                | auto

Entity Extraction

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/entities \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d text="ACME+corp+was+founded+by+John+Smith+in+Chicago."
textapi.entities({
  text: 'ACME corp was founded by John Smith in Chicago.'
}, function(error, response) {
  if (error === null) {
    Object.keys(response.entities).forEach(function(e) {
      console.log(e + ": " + response.entities[e].join(","));
    });
  }
});
text = "ACME corp was founded by John Smith in Chicago."
entities = client.Entities({"text": text})
for entity_type, values in entities['entities'].items():
  print(entity_type, ', '.join(values))
<?php
$text = 'ACME corp was founded by John Smith in Chicago.';
$entities = $textapi->Entities(array('text' => $text));
foreach ($entities->entities as $type => $values) {
  printf($type . ": " . implode(', ', $values) . "\n");
}
?>
EntitiesParams.Builder builder = EntitiesParams.newBuilder();
String text = "ACME corp was founded by John Smith in Chicago.";
builder.setText(text);
Entities entities = client.entities(builder.build());
for (Entity entity: entities.getEntities()) {
    System.out.print(entity.getType() + ": ");
    for (String sf: entity.getSurfaceForms()) {
        System.out.print("\"" + sf + "\" ");
    }
    System.out.println();
}
text = "ACME corp was founded by John Smith in Chicago."

response = client.entities(text: text)

response[:entities].each do |type, values|
  puts "#{type}, #{values}"
end
text := "ACME corp was founded by John Smith in Chicago."
entitiesParams := &textapi.EntitiesParams{Text: text}
entities, err := client.Entities(entitiesParams)
if err != nil {
    panic(err)
}
for k, v := range entities.Entities {
    fmt.Printf("%s\t%v\n", k, v)
}
string text = "ACME corp was founded by John Smith in Chicago.";

var entities = client.Entities(text: text).EntitiesMember;

Console.WriteLine(string.Join(", ", entities.Location));
Console.WriteLine(string.Join(", ", entities.Keyword));
Console.WriteLine(string.Join(", ", entities.Organization));
Console.WriteLine(string.Join(", ", entities.Person));

Documents often contain mentions of entities such as people, places, products and organizations, which we collectively call Named Entities. Additionally they may also contain specific values or items such as links, telephone numbers, email addresses, currency amounts and percentages. To extract these entities and values from a piece of text, as well as the keywords, you can use the Entity Extraction endpoint.

Entity Extraction relies on structural and linguistic patterns in a document to find and extract entities, rather than on a knowledge base, and can therefore be error-prone; for higher precision on well-known entities, combine it with Concept Extraction.

HTTP Request

  • GET https://api.aylien.com/api/v1/entities
  • POST https://api.aylien.com/api/v1/entities

Parameters

Sample response (JSON):

{
  "text":"ACME corp was founded by John Smith in Chicago.",
  "language":"en",
  "entities":{
    "location":[
      "Chicago"
    ],
    "keyword":[
      "John",
      "corp",
      "Smith",
      "Chicago",
      "ACME"
    ],
    "organization":[
      "ACME"
    ],
    "person":[
      "John Smith"
    ]
  }
}
Parameter | Data type | Description                          | Default
text      | string    | Text to analyze                      |
url       | string    | URL to analyze                       |
language  | string    | Language (refer to Language Support) | auto

Concept Extraction

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/concepts \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d text="Apple+was+founded+by+Steve+Jobs,+Steve+Wozniak+and+Ronald+Wayne."
textapi.concepts({
  text: 'Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne.'
}, function(error, response) {
  if (error === null) {
    Object.keys(response.concepts).forEach(function(concept) {
      var surfaceForms = response.concepts[concept].surfaceForms.map(function(sf) {
        return sf['string'];
      });
      console.log(concept + ": " + surfaceForms.join(","));
    });
  }
});
text = "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne."
concepts = client.Concepts({"text": text})
for uri, value in concepts['concepts'].items():
  sfs = map(lambda c: c['string'], value['surfaceForms'])
  print(uri,', '.join(sfs))
<?php
$text = 'Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne.';
$concepts = $textapi->Concepts(array('text' => $text));
foreach ($concepts->concepts as $uri => $value) {
  $surfaceForms = array_map(function($sf) {
    return $sf->string;
  }, $value->surfaceForms);
  printf("$uri\t" . implode(",", $surfaceForms) . "\n");
}
?>
ConceptsParams.Builder builder = ConceptsParams.newBuilder();
String text = "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne.";
builder.setText(text);
Concepts concepts = client.concepts(builder.build());
for (Concept concept: concepts.getConcepts()) {
  System.out.println(concept);
}
text = "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne."

response = client.concepts(text: text)

response[:concepts].each do |concept, value|
  surface_forms = value[:surfaceForms].map { |c| c[:string] }.join(', ')
  puts "#{concept}:#{surface_forms}"
end
text := "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne."
conceptsParams := &textapi.ConceptsParams{Text: text}
concepts, err := client.Concepts(conceptsParams)
if err != nil {
    panic(err)
}
for k, v := range concepts.Concepts {
    fmt.Printf("%s\t%v\n", k, v.SurfaceForms)
}
string text = "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne.";

Concepts concepts = client.Concepts(text: text);

foreach (var concept in concepts.ConceptsMember)
{
  Console.WriteLine(concept.Key);
}

The Concept Extraction endpoint extracts different types of notable entities from a document, using Wikipedia (and potentially other knowledge bases) as context. It also taps into Linked Open Data to provide structured data around the extracted entities: LOD URIs, which can be used to retrieve additional information about an entity (such as a person's height or a company's stock price), and the entity's semantic types (DBpedia, Schema.org, etc.), which can be used to filter entities by type.

N.B. We recommend using both Entity and Concept Extraction together if you're looking to extract well-known entities with higher precision. See Entity Extraction.
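
A rough Python sketch of that combination, aimed at precision: keep an entity mention only when Concept Extraction also links it to a knowledge-base URI. The response shapes follow this section's sample JSON.

text = "Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne."

entities = client.Entities({"text": text})["entities"]
concepts = client.Concepts({"text": text})["concepts"]

# Surface strings that Concept Extraction linked to a knowledge-base URI.
linked = set()
for value in concepts.values():
  for sf in value["surfaceForms"]:
    linked.add(sf["string"])

# Keep organization/person/location mentions corroborated by a concept link.
for entity_type in ("organization", "person", "location"):
  for mention in entities.get(entity_type, []):
    if mention in linked:
      print(entity_type, mention, "(linked)")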

HTTP Request

  • GET https://api.aylien.com/api/v1/concepts
  • POST https://api.aylien.com/api/v1/concepts

Parameters

Sample response (JSON):

{
  "text":"Apple was founded by Steve Jobs, Steve Wozniak and Ronald Wayne.",
  "language":"en",
  "concepts":{
    "http://dbpedia.org/resource/Apple_Inc.":{
      "surfaceForms":[
        {
          "string":"Apple",
          "score":0.9994597361117074,
          "offset":0
        }
      ],
      "types":[
        "http://www.wikidata.org/entity/Q43229",
        "http://schema.org/Organization",
        "http://dbpedia.org/ontology/Organisation",
        "http://dbpedia.org/ontology/Company"
      ],
      "support":10626
    }
  }
}
Parameter | Data type | Description                          | Default
text      | string    | Text to analyze                      |
url       | string    | URL to analyze                       |
language  | string    | Language (refer to Language Support) | auto

Summarization

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/summarize \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d sentences_number=3 \
   -d url="http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"
textapi.summarize({
  url: 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate',
  sentences_number: 3
}, function(error, response) {
  if (error === null) {
    response.sentences.forEach(function(s) {
      console.log(s);
    });
  }
});
url = 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate'
summary = client.Summarize({'url': url, 'sentences_number': 3})
for sentence in summary['sentences']:
  print(sentence)
<?php
$url = 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate';
$summary = $textapi->Summarize(array('url' => $url, 'sentences_number' => 3));
foreach ($summary->sentences as $sentence) {
  echo $sentence,"\n";
}
?>
SummarizeParams.Builder builder = SummarizeParams.newBuilder();
java.net.URL url = new java.net.URL("http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate");
builder.setUrl(url);
builder.setNumberOfSentences(3);
Summarize summary = client.summarize(builder.build());
for (String sentence: summary.getSentences()) {
  System.out.println(sentence);
}
url = 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate'

summary = client.summarize(url: url, sentences_number: 3)

summary[:sentences].each do |sentence|
  puts sentence
end
url := "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"
summarizeParams := &textapi.SummarizeParams{URL: url, NumberOfSentences: 3}
summary, err := client.Summarize(summarizeParams)
if err != nil {
    panic(err)
}
for _, s := range summary.Sentences {
    fmt.Println(s)
}
string url = "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate";

var summary = client.Summarize(url: url, sentencesNumber: 3).Sentences;

foreach (var sentence in summary)
{
  Console.WriteLine(sentence);
}

The Summarization endpoint provides an easy way of summarizing a document such as a news article or blog post into a few key sentences. You can specify the length of the summary via the sentences_number or sentences_percentage parameters.
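
For instance, here is a Python sketch requesting a percentage-based summary instead of a fixed sentence count; whether the SDK accepts sentences_percentage in the same parameter dict as the earlier example is an assumption based on the parameter table below:

url = "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"

# Ask for a summary roughly 20% the length of the original article,
# rather than a fixed number of sentences.
summary = client.Summarize({"url": url, "sentences_percentage": 20})
for sentence in summary["sentences"]:
  print(sentence)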

HTTP Request

  • GET https://api.aylien.com/api/v1/summarize
  • POST https://api.aylien.com/api/v1/summarize

Parameters

Sample response (JSON):

{
  "sentences":[
    "It’s been almost two years since the world was captivated by Snowden’s leaks to The Guardian and The Washington Post about American surveillance programs.",
    "In response to the public outcry that followed the Snowden revelations, President Obama stipulated that congress must renew or reform the Patriot Act provision authorizing the bulk collection of Americans’ phone records by that date, or else the program will expire.",
    "Snowden then went on to explain how the government uses different programs to access those pictures, from Executive Order 12333 to Section 702 of the Foreign Intelligence Surveillance Act."
  ],
  "text":"For many Americans who talked to John Oliver on Last Week Tonight, the answer is no..."
}
Parameter            | Data type | Description                                       | Default
url                  | string    | Article or webpage URL                            |
title                | string    | Title of the text to summarize                    |
text                 | string    | Text to summarize                                 |
sentences_number     | integer   | Summary length as number of sentences             | 5
sentences_percentage | integer   | Summary length as percentage of original document |
language             | string    | Language (refer to Language Support)              | auto

Article Extraction

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/extract \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d best_image=true \
   -d url="http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"
textapi.extract({
  url: 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate',
  best_image: true
}, function(error, response) {
  if (error === null) {
    console.log(response);
  }
});
url = "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"
extract = client.Extract({"url": url, "best_image": True})
print(extract)
<?php
$url = 'http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate';
$extract = $textapi->Extract(array('url' => $url, 'best_image' => 'true'));
var_dump($extract);
?>
ExtractParams.Builder builder = ExtractParams.newBuilder();
java.net.URL url = new java.net.URL("http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate");
builder.setUrl(url);
builder.setBestImage(true);
Article extract = client.extract(builder.build());
System.out.println(extract);
url = "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"

extract = client.extract(url: url, best_image: true)

puts extract
url := "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate"
extractParams := &textapi.ExtractParams{URL: url, BestImage: true}
article, err := client.Extract(extractParams)
if err != nil {
  panic(err)
}
fmt.Printf("%v\n", article)
string url = "http://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate";

var extract = client.Extract(url: url, bestImage: true);

Console.WriteLine("Title: " + extract.Title);
Console.WriteLine("Author: " + extract.Author);

If you are dealing with webpages and articles, chances are the text you'd like to analyze is surrounded by some 'clutter' such as site navigation or ads. In order to get accurate results in your text analysis, you might want to remove such clutter and extract the main text of the webpage or article. Article Extraction allows you to do that, and in addition to removing clutter, also helps you extract the following information:

  • Title: raw title of the webpage or article
  • Article: full text of the webpage or article
  • Author: name of the author
  • Image: the main image on the webpage or article
  • Videos: an array of videos embedded in the webpage or article
  • Feeds: an array of RSS feeds found on the webpage or article
  • Publish Date: publish date of the article
  • Keywords: an array of keywords extracted from the webpage
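
A short Python sketch that walks those fields, continuing the Python example above and assuming the SDK returns a dict keyed as in the sample response below:

extract = client.Extract({"url": url, "best_image": True})

# Field names follow the sample response below.
print("Title:", extract["title"])
print("Author:", extract["author"])
print("Published:", extract["publishDate"])
for feed in extract["feeds"]:
  print("Feed:", feed)
for video in extract["videos"]:
  print("Video:", video)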

HTTP Request

  • GET https://api.aylien.com/api/v1/extract
  • POST https://api.aylien.com/api/v1/extract

Parameters

Sample response (JSON):

{
    "article": "Remember Edward Snowden?\r\n\r\nFor many Americans who talked to John Oliver\u00a0on Last Week Tonight, the answer is no...",
    "author": "Cat Zakrzewski",
    "feeds": [
        "https://techcrunch.com/feed/",
        "https://techcrunch.com/comments/feed/",
        "https://techcrunch.com/2015/04/06/john-oliver-just-changed-the-surveillance-reform-debate/feed/"
    ],
    "image": "",
    "keywords": [],
    "publishDate": "2015-04-06T17:45:49+00:00",
    "title": "John Oliver Just Changed The Surveillance Reform Debate",
    "videos": [
        "https://www.youtube.com/embed/XEVlyP4_11M?version=3&rel=1&fs=1&autohide=2&showsearch=0&showinfo=1&iv_load_policy=1&wmode=transparent"
    ]
}
Parameter  | Data type | Description                                                                                 | Default
url        | string    | Article or webpage URL                                                                      |
html       | string    | Raw HTML to extract text from                                                               |
best_image | string    | Whether or not the API should try to extract the best image (might affect processing time)  | false
language   | string    | Language (refer to Language Support)                                                        | auto

Image Tagging

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/image-tags \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d url="https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg"
textapi.imageTags({
  url: 'https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg'
}, function(error, response) {
  if (error === null) {
    console.log(response.tags);
  }
});
url = "https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg"
imageTags = client.ImageTags({"url": url})
for tag in imageTags['tags']:
  print(tag)
<?php
$url = 'https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg';
$imageTags = $textapi->ImageTags(array('url' => $url));
var_dump($imageTags->tags);
?>
ImageTagsParams.Builder builder = ImageTagsParams.newBuilder();
java.net.URL url = new java.net.URL("https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg");
builder.setUrl(url);
ImageTags imageTags = client.imageTags(builder.build());
for (ImageTag tag: imageTags.getTags()) {
    System.out.println(tag.getName() + " (" + tag.getConfidence() + ")");
}
url = "https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg"

image_tags = client.image_tags(url: url)

image_tags[:tags].each do |tag|
  puts tag
end
url := "https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg"
imageTagsParams := &textapi.ImageTagsParams{URL: url}
imageTags, err := client.ImageTags(imageTagsParams)
if err != nil {
    panic(err)
}
for _, t := range imageTags.Tags {
    fmt.Printf("%s: %f\n", t.Tag, t.Confidence)
}
string url = "https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg";

var imageTags = client.ImageTags(url: url);

foreach (var tag in imageTags.Tags)
{
  Console.WriteLine(tag.Name);
}

The Image Tagging endpoint identifies common shapes, objects and concepts in an image and returns them as a list of tags, each with a confidence score indicating how sure the system is about the assignment.
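
As a sketch, you might keep only tags above a confidence threshold, continuing the Python example above; the 0.4 cut-off is an arbitrary assumption, not an API default:

image_tags = client.ImageTags({"url": url})

# Drop low-confidence tags; 0.4 is an arbitrary threshold.
for tag in image_tags["tags"]:
  if tag["confidence"] >= 0.4:
    print(tag["tag"], round(tag["confidence"], 3))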

HTTP Request

  • GET https://api.aylien.com/api/v1/image-tags
  • POST https://api.aylien.com/api/v1/image-tags

Parameters

Sample response (JSON):

{
  "image-tags":{
    "image":"https://c1.staticflickr.com/5/4112/5170590074_714d36db83_b.jpg",
    "tags":[
      {
        "tag":"retriever",
        "confidence":1
      },
      {
        "tag":"dog",
        "confidence":0.5238458474035048
      },
      {
        "tag":"puppy",
        "confidence":0.491276660821357
      },
      {
        "tag":"golden retriever",
        "confidence":0.45781040296243847
      },
      {
        "tag":"sporting dog",
        "confidence":0.3530136438969765
      }
    ]
  }
}
Parameter | Data type | Description      | Default
url       | string    | URL of the image |

Hashtag Suggestion

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/hashtags \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d url="http://www.bbc.com/sport/0/football/25912393"
textapi.hashtags({
  url: 'http://www.bbc.com/sport/0/football/25912393'
}, function(error, response) {
  if (error === null) {
    console.log(response.hashtags);
  }
});
url = "http://www.bbc.com/sport/0/football/25912393"
hashtags = client.Hashtags({"url": url})
print(', '.join(hashtags['hashtags']))
<?php
$url = 'http://www.bbc.com/sport/0/football/25912393';
$hashtags = $textapi->Hashtags(array('url' => $url));
echo implode(', ', $hashtags->hashtags);
?>
HashTagsParams.Builder builder = HashTagsParams.newBuilder();
java.net.URL url = new java.net.URL("http://www.bbc.com/sport/0/football/25912393");
builder.setUrl(url);
HashTags hashTags = client.hashtags(builder.build());
for (String hashTag: hashTags.getHashtags()) {
    System.out.println(hashTag);
}
url = "http://www.bbc.com/sport/0/football/25912393"

response = client.hashtags(url: url)

puts response[:hashtags].join(', ')
url := "http://www.bbc.com/sport/0/football/25912393"
hashtagsParams := &textapi.HashtagsParams{URL: url}
hashtags, err := client.Hashtags(hashtagsParams)
if err != nil {
    panic(err)
}
for _, h := range hashtags.Hashtags {
    fmt.Printf("%s\t", h)
}
string url = "http://www.bbc.com/sport/0/football/25912393";

var hashtags = client.Hashtags(url: url);

Console.WriteLine(string.Join(", ", hashtags.HashtagsMember));

Hashtags have become a popular way of tagging content on Social Media, and attaching hashtags to a piece of content can dramatically increase its visibility on various Social Networking platforms such as Facebook, Twitter, Google+, Instagram and LinkedIn. Using Hashtag Suggestion, you can automatically generate a list of highly relevant hashtags that will help you get more exposure for your content on Social Media.

HTTP Request

  • GET https://api.aylien.com/api/v1/hashtags
  • POST https://api.aylien.com/api/v1/hashtags

Parameters

Sample response (JSON):

{
  "language":"en",
  "hashtags":[
    "#LionelMessi",
    "#FCBarcelona",
    "#France",
    "#ParisSaintGermainFC",
    "#GerardoMartino",
    "#RAC1",
    "#ArgentinaNationalFootballTeam",
    "#JosepMariaBartomeu"
  ],
  "text":"Messi not for sale - Barca president\nLionel Messi: Forward is not for sale, says Barcelona president\n\nBarcelona forward Lionel Messi is not for sale and the club plan to discuss a new contract with the Argentine, says president Josep Maria Bartomeu..."
}
Parameter | Data type | Description                          | Default
text      | string    | Text to analyze                      |
url       | string    | URL to analyze                       |
language  | string    | Language (refer to Language Support) | auto

Language Detection

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/language \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   -d text="Hablas%2Bespa%C3%B1ol%3F"
textapi.language({
  text: "Hablas español?"
}, function(error, response) {
  if (error === null) {
    console.log(response);
  }
});
language = client.Language({"text": "Hablas español?"})
print(language)
<?php
$language = $textapi->Language(array('text' => 'Hablas español?'));
var_dump($language);
?>
LanguageParams.Builder builder = LanguageParams.newBuilder();
builder.setText("Hablas español?");
Language language = client.language(builder.build());
System.out.println(language);
language = client.language(text: "Hablas español?")
puts language
text := "Hablas español?"
languageParams := &textapi.LanguageParams{Text: text}
language, err := client.Language(languageParams)
if err != nil {
    panic(err)
}
fmt.Printf("%v\n", language)
string text = "Hablas español?";

var lang = client.Language(text: text);

Console.WriteLine(lang.Lang);

Language Detection detects the language of any text or URL swiftly and accurately, and returns it as an ISO 639-1 language code.
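
One common pattern is to gate other analyses on the detected language; here is a Python sketch using the SDK calls shown earlier, with field names following the sample response below:

text = "Hablas español?"
language = client.Language({"text": text})

# ELSA currently supports English only (see Entity Level Sentiment
# Analysis above), so only call it when the detected code is "en".
if language["lang"] == "en":
  elsa = client.Elsa({"text": text})
  print(elsa)
else:
  print("Unsupported language:", language["lang"])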

HTTP Request

  • GET https://api.aylien.com/api/v1/language
  • POST https://api.aylien.com/api/v1/language

Parameters

Sample response (JSON):

{
  "text":"Hablas+español?",
  "lang":"es",
  "confidence":0.9999981087029495
}
Parameter | Data type | Description     | Default
text      | string    | Text to analyze |
url       | string    | URL to analyze  |

Combined Calls

Download and install SDKs from here.

curl https://api.aylien.com/api/v1/combined \
   -H "X-AYLIEN-TextAPI-Application-Key: YOUR_APP_KEY" \
   -H "X-AYLIEN-TextAPI-Application-ID: YOUR_APP_ID" \
   --data-urlencode "url=http://www.bbc.com/news/technology-33764155" \
   --data-urlencode "endpoint=entities" \
   --data-urlencode "endpoint=concepts" \
   --data-urlencode "endpoint=classify"
textapi.combined({
  "url": "http://www.bbc.com/news/technology-33764155",
  "endpoint": ["entities", "concepts", "classify"]
}, function(err, result) {
  if (err === null) {
    result.results.forEach(function(r) {
      console.log(r.endpoint + ':');
      console.log(r.result);
    });
  } else {
    console.log(err)
  }
});
combined = client.Combined({
  'url': "http://www.bbc.com/news/technology-33764155",
  'endpoint': ["entities", "concepts", "classify"]
})

for result in combined["results"]:
  print(result["endpoint"])
  print(result["result"])
<?php
$combined = $textapi->Combined(array(
  'url' => 'http://www.bbc.com/news/technology-33764155',
  'endpoint' => array("entities", "concepts", "classify")
));
foreach($combined->results as $result) {
  echo $result->endpoint,"\n";
  var_dump($result->result);
}
?>
CombinedParams.Builder builder = CombinedParams.newBuilder();
String[] endpoints = {"entities", "concepts", "classify"};
URL url = new URL("http://www.bbc.com/news/technology-33764155");
builder.setUrl(url);
builder.setEndpoints(endpoints);
Combined combined = client.combined(builder.build());
for (Entity entity: combined.getEntities().getEntities()) {
  System.out.println(entity.getType() + ": ");
  for (String sf: entity.getSurfaceForms()) {
    System.out.println("\"" + sf + "\" ");
  }
}
for (Category category: combined.getClassifications().getCategories()) {
  System.out.println(category);
}
url = "http://www.bbc.com/news/technology-33764155"
endpoints = ["entities", "concepts", "classify"]

combined = client.combined(url: url, endpoint: endpoints)

combined[:results].each do |result|
  puts result[:endpoint]
  puts result[:result]
end
endpoints := []string{"entities", "concepts", "classify"}
combinedParams := &textapi.CombinedParams{
  URL: "http://www.bbc.com/news/technology-33764155",
  Endpoints: endpoints,
}
result, err := client.Combined(combinedParams)
if err != nil {
  panic(err)
}
fmt.Printf("%v\n", result.Entities)
fmt.Printf("%v\n", result.Concepts)
fmt.Printf("%v\n", result.Classifications)
var endpoints = new string[] { "entities", "concepts", "classify" };
var url = "http://www.bbc.com/news/technology-33764155";

var combined = client.Combined(url: url, endpoints: endpoints);

Console.WriteLine(string.Join(", ", combined.Entities.EntitiesMember.Keyword));

foreach (var item in combined.Results)
{
  Console.WriteLine(item.Endpoint);
}

The Combined Calls endpoint allows you to perform multiple analysis operations on the same input (text or URL) with a single API call. This helper saves your application from making multiple API calls and reduces the number of network round trips between your application and our servers, resulting in a lower overall analysis time.

To use the Combined Calls endpoint, supply the list of endpoints you wish to apply to your input. The following endpoints are currently available:

Endpoint name                   | Combined Call name
Aspect-Based Sentiment Analysis | absa/DOMAIN_NAME, e.g. "absa/hotels"
Sentiment Analysis              | sentiment
Classification by Taxonomy      | classify/TAXONOMY_NAME, e.g. "classify/iptc-subjectcode"
Article Extraction              | extract
Summarization                   | summarize
Concept Extraction              | concepts
Entity Extraction               | entities
Language Detection              | language
Hashtag Suggestion              | hashtags
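
For example, here is a Python sketch mixing plain and parameterized endpoint names from the table above, using the requests library so the repeated endpoint field is explicit; the credentials are placeholders:

import requests

HEADERS = {
  "X-AYLIEN-TextAPI-Application-ID": "YOUR_APP_ID",
  "X-AYLIEN-TextAPI-Application-Key": "YOUR_APP_KEY"
}

# Repeat the endpoint field once per analysis; parameterized names such
# as "classify/iab-qag" follow the table above.
data = [
  ("url", "http://www.bbc.com/news/technology-33764155"),
  ("endpoint", "entities"),
  ("endpoint", "classify/iab-qag"),
  ("endpoint", "summarize")
]
combined = requests.post("https://api.aylien.com/api/v1/combined",
                         headers=HEADERS, data=data).json()
for result in combined["results"]:
  print(result["endpoint"])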

HTTP Request

  • GET https://api.aylien.com/api/v1/combined
  • POST https://api.aylien.com/api/v1/combined

Parameters

Sample response (JSON):

{
  "results":[
    {
      "endpoint":"entities",
      "result":{
        "entities":{
          "keyword":[
            "internet servers",
            "flaw in the internet",
            "internet users",
            "server software",
            "exploits of the flaw",
            "internet",
            "System (DNS) software",
            "servers",
            "flaw",
            "expert",
            "vulnerability",
            "systems",
            "software",
            "exploits",
            "users",
            "websites",
            "addresses",
            "offline",
            "URLs",
            "services"
          ],
          "organization":[
            "DNS",
            "BBC"
          ],
          "person":[
            "Daniel Cid",
            "Brian Honan"
          ]
        },
        "language":"en"
      }
    },
    {
      "endpoint":"concepts",
      "result":{
        "concepts":{
          "http://dbpedia.org/resource/Apache_HTTP_Server":{
            "support":503,
            "surfaceForms":[
              {
                "offset":1314,
                "score":1.0,
                "string":"Apache"
              }
            ],
            "types":[
              "http://dbpedia.org/ontology/Software"
            ]
          },
          "http://dbpedia.org/resource/BBC_News":{
            "support":2062,
            "surfaceForms":[
              {
                "offset":1161,
                "score":0.8707235306716345,
                "string":"BBC"
              }
            ],
            "types":[
              "http://dbpedia.org/ontology/Company"
            ]
          },
          "http://dbpedia.org/resource/Denial-of-service_attack":{
            "support":620,
            "surfaceForms":[
              {
                "offset":317,
                "score":1.0,
                "string":"denial-of-service attacks"
              }
            ],
            "types":[
              ""
            ]
          },
          "http://dbpedia.org/resource/Domain_Name_System":{
            "support":1437,
            "surfaceForms":[
              {
                "offset":495,
                "score":1.0,
                "string":"Domain Name System"
              },
              {
                "offset":515,
                "score":0.9999999999599822,
                "string":"DNS"
              }
            ],
            "types":[
              ""
            ]
          },
          "http://dbpedia.org/resource/Internet_Systems_Consortium":{
            "support":45,
            "surfaceForms":[
              {
                "offset":818,
                "score":1.0,
                "string":"Internet Systems Consortium"
              }
            ],
            "types":[
              "http://dbpedia.org/ontology/Non-ProfitOrganisation"
            ]
          },
          "http://dbpedia.org/resource/OpenSSL":{
            "support":247,
            "surfaceForms":[
              {
                "offset":1322,
                "score":1.0,
                "string":"OpenSSL"
              }
            ],
            "types":[
              "http://dbpedia.org/ontology/Software"
            ]
          }
        },
        "language":"en"
      }
    },
    {
      "endpoint":"classify",
      "result":{
        "categories":[
          {
            "code":"04003005",
            "confidence":1.0,
            "label":"computing and information technology - software"
          }
        ],
        "language":"en"
      }
    }
  ],
  "text":"Hackers target internet address bug to disrupt sites\nHackers are exploiting a serious flaw in the internet's architecture, according to a security firm.\n\nThe bug targets systems which convert URLs into IP addresses.\n\nExploiting it could threaten the smooth running of internet services as it allows hackers to launch denial-of-service attacks on websites, potentially forcing them offline.\n\nRegular internet users are unlikely to be severely affected, however.\n\nBind is the name of a variety of Domain Name System (DNS) software used on the majority of internet servers.\n\nThe recently identified bug allows attackers to crash the software, therefore taking the DNS service offline and preventing URLs, for example, from working.\n\nA patch for the flaw is already available, but many systems are yet to be updated.\n\nThe Internet Systems Consortium (ISC), which develops Bind, said in a tweet that the vulnerability was \"particularly critical\" and \"easily exploited\".\n\nDaniel Cid, a networking expert at Sucuri has published a blog post on the vulnerability in which he explained that real exploits taking advantage of the flaw have already happened.\n\nHe told the BBC: \"A few of our clients, in different industries, had their DNS servers crashed because of it.\n\n\"Based on our experience, server software, like Bind, Apache, OpenSSL and others, do not get patched as often as they should.\"\n\nCybersecurity expert Brian Honan commented that a spike in exploits of the flaw was expected over the next few days.\n\nHowever, he added that websites would often still be accessible via other routes and cached addresses on DNS servers around the world, even when certain key DNS servers have been made to crash.\n\n\"It's not a doomsday scenario, it's a question of making sure the DNS structure can continue to work while patches are rolled out,\" he said.\n\nThe impact on general internet users is likely to be minimal, according to Mr Cid.\n\n\"Average internet users won't feel much pain, besides a few sites and email servers down,\" he said."
}
Parameter | Data type | Description                                        | Default
text      | string    | Text to analyze                                    |
url       | string    | URL to analyze                                     |
endpoint  | string    | Analysis to perform (repeat for multiple analyses) |
language  | string    | Language of the input text or URL                  | en