2016-10 Archives - DBpedia Association (https://www.dbpedia.org/2016-10/)

New DBpedia Usage Report for July 2017 through January 2021
Published 13 Aug 2021 at https://www.dbpedia.org/blog/new-dbpedia-usage-report-2021/

Our partner OpenLink Software recently published a new DBpedia usage report on the SPARQL endpoint and associated Linked Data deployment.

Copyright © 2021 OpenLink Software

Introduction

This document shows some of the statistics from the DBpedia 2016-10 dataset collected between July 2017 and January 2021, spanning more than three and a half years of logs from the DBpedia web service operated by OpenLink Software at http://dbpedia.org/sparql/ .

The log files used to prepare this document cover the DBpedia 2016-10 release.

Infrastructure

The DBpedia service consists of:

  • two or more Virtuoso Universal Server instances, which provide the Linked Data deployment, including a SPARQL endpoint that delivers RDF data in a variety of document formats subject to content negotiation;
  • a reverse proxy server, which redirects client requests to an available Virtuoso instance and caches the results in case another client repeats the same request within a specified timeframe;
  • a physical computer hosted in OpenLink Software’s datacenter.

Currently, the DBpedia service is hosted on two virtual machines running CentOS 6, each with 8 Intel Xeon E5-2630 2.30 GHz cores, a 200 GB SSD, and 64 GB of memory, hosting Virtuoso 7.2 Enterprise Edition with the Column Store Module.

Rate and Connection limits

To maintain equitable access to the DBpedia service for everyone, OpenLink Software limits both the request rate and the number of concurrent connections per client, limiting disruption by faulty or misbehaving applications.

Current limit rates are:

  • Connection limit of 50 parallel connections per IP address. This number is fairly high to permit multiple clients in networks using Network Address Translation (NAT) to appear as one network IP. Without the use of tracking cookies, it is impossible to distinguish between machines inside a NAT network, and for privacy and legal reasons, OpenLink Software has decided not to use such cookies at this point in time.
  • Rate limit of 100 requests per second per IP address, with an initial burst of 120 requests.

As part of monitoring the DBpedia service, OpenLink Software performs frequent traffic analysis to make sure the service is running smoothly.

Ideally, applications should be written to check the HTTP status code of each request, and in case of a 503 (Service Unavailable) or 429 (Too Many Requests) code, perform a 1–2 second sleep before retrying the request.
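
For illustration, here is a minimal client sketch in Python (assuming the requests library; the format parameter follows a common Virtuoso convention) that sleeps briefly on a 429 or 503 response before retrying:

import time
import requests

ENDPOINT = "http://dbpedia.org/sparql"

def run_query(query, max_retries=5):
    # 429/503 signal rate limiting or temporary unavailability;
    # sleep 1-2 seconds before retrying, as recommended above.
    params = {"query": query, "format": "application/sparql-results+json"}
    for attempt in range(max_retries):
        response = requests.get(ENDPOINT, params=params, timeout=130)
        if response.status_code in (429, 503):
            time.sleep(1 + attempt)  # back off a little more on each retry
            continue
        response.raise_for_status()
        return response.json()
    raise RuntimeError("gave up after %d retries" % max_retries)

print(run_query("SELECT ?s WHERE { ?s a <http://dbpedia.org/ontology/City> } LIMIT 5"))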

OpenLink Software may alter these parameters at any time to make sure the service remains reachable to the general public.

In case of misuse, OpenLink Software may temporarily block an offender’s IP address from accessing the DBpedia service. This temporary ban will be automatically lifted once such a blocked IP address refrains from making any request to the DBpedia service for at least 5 minutes.

Configured Virtuoso limits on the DBpedia endpoint

The Virtuoso configuration for the DBpedia endpoint includes:

  • Query Execution Timeout of 120 seconds. This is the query solution preparation threshold. If the timeout stops execution before the solution is complete — i.e., if the solution is partial — this is indicated to the query client via HTTP response headers.
  • Maximum SPARQL query solution (aka result set) size of 10,000 rows. This is the maximum number of solution rows (for SELECT queries) or triple/quad statements (for CONSTRUCT or DESCRIBE queries) returned per query-solution-retrieval round-trip.

Virtuoso “Anytime Query” Functionality

The Anytime Query is a core Virtuoso feature that addresses the challenges inherent in providing a publicly accessible interface for ad-hoc querying at Web scale. It allows an application compliant with the SPARQL and HTTP protocols to issue long-running and/or large-solution queries, for which computing the complete solution would exceed the configured query timeout and/or result set limits, and, rather than being rebuffed with no solution, to receive a partial solution conforming to those thresholds. Further, it enables the use of LIMIT and OFFSET (typically combined with ORDER BY and/or GROUP BY) to create windows (also known as sliding windows or cursors) to iterate through the complete query solution without being adversely affected by concurrent inserts or deletions.

Note: Even while paging through a partial query solution, Virtuoso continues to work towards a complete solution in the background.
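
As an illustration of such window-based iteration, here is a small Python sketch (assuming the requests library; the page size matches the 10,000-row result set limit described above):

import requests

ENDPOINT = "http://dbpedia.org/sparql"
PAGE_SIZE = 10000  # the configured maximum result set size

def fetch_all(pattern):
    # Page through the complete solution in LIMIT/OFFSET windows;
    # ORDER BY keeps the windows stable between round-trips.
    offset = 0
    while True:
        query = ("SELECT ?s WHERE { %s } ORDER BY ?s LIMIT %d OFFSET %d"
                 % (pattern, PAGE_SIZE, offset))
        response = requests.get(ENDPOINT, params={
            "query": query, "format": "application/sparql-results+json"})
        response.raise_for_status()
        bindings = response.json()["results"]["bindings"]
        yield from bindings
        if len(bindings) < PAGE_SIZE:
            break  # final (possibly partial) window reached
        offset += PAGE_SIZE

for row in fetch_all("?s a <http://dbpedia.org/ontology/Museum>"):
    print(row["s"]["value"])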

Custom HTTP headers

As the W3C SPARQL standard does not currently specify an authoritative status code or response header for reporting a partial result set, OpenLink Software has opted to have Virtuoso return status code 200, denoting a successful request, and add a custom header to the result to indicate that the result was limited to what could be returned within the settings enforced by the server.

If full execution of the query would return more than the configured maximum number of rows, the X-SPARQL-MaxRows line is added, as shown below:

HTTP/1.1 200 OK
Date: Tue, 01 Jan 2018 12:00:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 1427536
Connection: keep-alive
Vary: Accept-Encoding
Server: Virtuoso/07.20.3224 (Linux) i686-generic-linux-glibc212-64 VDB
X-SPARQL-default-graph: http://dbpedia.org
X-SPARQL-MaxRows: 10000
Expires: Tue, 07 Jan 2018 12:00:00 GMT
Cache-Control: max-age=604800
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: HEAD, GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Accept-Encoding
Accept-Ranges: bytes

If the Anytime Query timeout is reached, several headers are added:

HTTP/1.1 200 OK
Date: Tue, 01 Jan 2018 12:00:00 GMT
Content-Type: text/html; charset=UTF-8
Content-Length: 80
Connection: keep-alive
Server: Virtuoso/07.20.3224 (Linux) i686-generic-linux-glibc212-64 VDB
X-SPARQL-default-graph: http://dbpedia.org
X-SQL-State: S1TAT
X-SQL-Message: RC...: Returning incomplete results, query interrupted by result timeout. Activity: 7 rnd 64.87M seq 0 same seg 1 same pg 0 same par 0 disk 0 spec disk 0B / 0 mess
X-Exec-Milliseconds: 30000
X-Exec-DB-Activity: 7 rnd 64.87M seq 0 same seg 1 same pg 0 same par 0 disk 0 spec disk 0B / 0 messages 0 fork
Expires: Tue, 07 Jan 2018 12:00:00 GMT
Cache-Control: max-age=604800
Access-Control-Allow-Origin: *
Access-Control-Allow-Credentials: true
Access-Control-Allow-Methods: HEAD, GET, POST, OPTIONS
Access-Control-Allow-Headers: DNT,X-CustomHeader,Keep-Alive,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Accept-Encoding
Accept-Ranges: bytes
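
A client can therefore detect a truncated or timed-out solution by inspecting these headers rather than the status code. A minimal Python sketch (assuming the requests library; header names as shown above):

import requests

response = requests.get("http://dbpedia.org/sparql", params={
    "query": "SELECT ?s ?p ?o WHERE { ?s ?p ?o }",
    "format": "application/sparql-results+json"})
response.raise_for_status()  # a partial solution still arrives as 200 OK

if "X-SPARQL-MaxRows" in response.headers:
    print("Result truncated to", response.headers["X-SPARQL-MaxRows"], "rows")
if response.headers.get("X-SQL-State"):
    # Anytime Query timeout was hit: the solution is incomplete.
    print("Partial solution:", response.headers.get("X-SQL-Message"))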

Hosting Independent DBpedia Instances

The restrictions described above may impair some complex analytical queries. Users who frequently run into these limits are advised to host an independent DBpedia instance, for example by loading the published datasets into a local triple store of their own, where no such restrictions apply.
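
For modest slices of the data, even a lightweight in-process store sidesteps these limits entirely; full mirrors are better served by a dedicated triple store such as Virtuoso. A small sketch, assuming Python with rdflib and an illustrative dump filename:

from rdflib import Graph

# Load a downloaded dataset slice into an in-process store, where the
# endpoint's 120-second timeout and 10,000-row cap do not apply.
# The filename is illustrative; pick any dump from downloads.dbpedia.org.
g = Graph()
g.parse("instance_types_en.ttl", format="turtle")

for row in g.query(
        "SELECT (COUNT(*) AS ?n) WHERE { ?s a <http://dbpedia.org/ontology/City> }"):
    print(row.n)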

HTTP logs

The HTTP server log files used in this report exclude traffic generated by:

  • IP addresses that were temporarily rate-limited after their burst period
  • IP addresses that were banned after misuse
  • applications, spiders, and other crawlers that were blocked after frequently hitting the rate limiter or that generally claimed too many resources

The system uses a combination of firewall rules and Access Control Lists (ACLs) to quickly drop such connections, so legitimate users of the DBpedia service can continue to connect and execute queries.

To save time, these dropped connections are not recorded in the log files.

The data for this document was extracted from reports generated by Webalizer v2.21.

HTTP Usage Historical Overview

The first table shows the average number of Visits and Hits per day during the time each DBpedia dataset was live on the http://dbpedia.org/sparql endpoint.

DBpedia   From         Until        Days   Visits per day   Hits per day     Total Hits
3.3       2009-06-30   2009-11-05    128            9,602        733,811     94,661,592
3.4       2009-11-06   2010-04-07    152           11,100      1,212,549    185,519,930
3.5       2010-04-08   2011-01-17    284           16,381      1,122,612    282,898,279
3.6       2011-01-18   2011-06-30    163           19,288      1,328,355    219,178,587
3.7       2011-07-01   2012-06-19    354           23,408      2,052,660    594,338,675
3.8       2012-06-20   2013-09-19    456           16,614      2,925,335    570,440,410
3.9       2013-09-20   2014-09-02    347           22,026      3,035,428  1,062,399,840
2014      2014-09-03   2015-07-05    305           27,927      3,423,490  1,051,011,401
2015-04   2015-07-06   2016-03-31    269           24,689      3,516,936    953,089,788
2015-10   2016-04-01   2016-10-13    195          110,745      6,581,217  1,263,593,686
2016-04   2016-10-14   2017-07-03    262          231,735      7,646,447  2,003,369,014
2016-10   2017-07-04   2021-01-07   1283          257,994      7,542,623  9,501,427,081

For detailed information on the specific usage numbers, please visit the original report by OpenLink Software, published here. Older reports are also available through their site, and the previous usage report (2020) can be read on the DBpedia blog.

Further Links

For the latest news, subscribe to the DBpedia Newsletter, check our DBpedia Website and follow us on Twitter or LinkedIn.

Thanks for reading and keep using DBpedia!

Yours, the DBpedia Association

Keep using DBpedia!
Published 08 Feb 2018 at https://www.dbpedia.org/blog/keep-using-dbpedia/

Just recently, DBpedia Association member and hosting specialist OpenLink released the DBpedia Usage Report, a periodic report on the DBpedia SPARQL endpoint and associated Linked Data deployment.

The report not only gives some historical insight into DBpedia’s usage, such as the number of visits and hits per day, but also presents statistics collected between October 2016 and December 2017. It covers more than a year of logs from the DBpedia web service operated by OpenLink Software at http://dbpedia.org/sparql/.

Before we highlight a few aspects of DBpedia’s usage, we would like to thank OpenLink for continuously hosting the DBpedia endpoint and for creating this report.

The report’s first graph shows the average number of hits/requests per day made to the DBpedia service during each release, and a second graph shows the average number of unique visits per day over the same periods.

Speaking of which, as the report’s tables show, there has been a massive increase in the number of hits coinciding with the DBpedia 2015-10 release on April 1st, 2016.

This boost can be attributed to intensive promotion of DBpedia through community meetings, communication with various partners in the Linked Data community, and a stronger social media presence, all aimed at increasing backlinks.

Since then, not only have the numbers of hits increased, but DBpedia has also achieved better data quality. We are constantly working on improving the accessibility, data quality and stability of the SPARQL endpoint. Kudos to OpenLink for maintaining the technical baseline for DBpedia.

A table in the report gives the usage overview of the past year.

The full report is available here.

Subscribe to the DBpedia Newsletter, check our DBpedia Website and follow us on Twitter, Facebook, and LinkedIn for the latest news.

Thanks for reading and keep using DBpedia!

Yours, the DBpedia Association

New DBpedia Release – 2016-10
Published 04 Jul 2017 at https://www.dbpedia.org/blog/new-dbpedia-release-2016-10/

We are happy to announce the new DBpedia Release.

This release is based on updated Wikipedia dumps dating from October 2016.

You can download the new DBpedia datasets in N3 / TURTLE serialisation from http://wiki.dbpedia.org/downloads-2016-10 or directly here http://downloads.dbpedia.org/2016-10/.
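
As a quick, hedged example, one of the compressed dumps can be fetched and inspected as follows (a Python sketch assuming the requests library; the exact filename and folder layout are assumptions, so check the download page for the real listing):

import bz2
import requests

# Illustrative URL; see http://downloads.dbpedia.org/2016-10/ for the actual listing.
URL = "http://downloads.dbpedia.org/2016-10/core-i18n/en/instance_types_en.ttl.bz2"

with requests.get(URL, stream=True) as response:
    response.raise_for_status()
    with open("instance_types_en.ttl.bz2", "wb") as handle:
        for chunk in response.iter_content(chunk_size=1 << 20):
            handle.write(chunk)

# Peek at the first few triples of the decompressed dump.
with bz2.open("instance_types_en.ttl.bz2", "rt", encoding="utf-8") as dump:
    for _, line in zip(range(5), dump):
        print(line.rstrip())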

This release took us longer than expected. We had to deal with multiple issues and included new data. Most notable is the addition of the NIF annotation datasets for each language, recording the whole wiki text, its basic structure (sections, titles, paragraphs, etc.) and the included text links. We hope that researchers and developers, working on NLP-related tasks, will find this addition most rewarding. The DBpedia Open Text Extraction Challenge (next deadline Mon 17 July for SEMANTiCS 2017) was introduced to instigate new fact extraction based on these datasets.

We want to thank everyone who contributed to this release by adding mappings, new datasets, extractors, or issue reports, helping us to increase the coverage and correctness of the released data. We also thank the European Commission and the ALIGNED H2020 project for funding and general support.

Read on for further details about the new release.

Statistics

Altogether the DBpedia 2016-10 release consists of 13 billion (2016-04: 11.5 billion) pieces of information (RDF triples) out of which 1.7 billion (2016-04: 1.6 billion) were extracted from the English edition of Wikipedia, 6.6 billion (2016-04: 6 billion) were extracted from other language editions and 4.8 billion (2016-04: 4 billion) from Wikipedia Commons and Wikidata.

Adding the large NIF datasets for each language edition (see details below) increased the number of triples by over 9 billion, bringing the overall count up to 23 billion triples.

Changes

  • The NLP Interchange Format (NIF) aims to achieve interoperability between Natural Language Processing (NLP) tools, language resources and annotations. To extend the versatility of DBpedia, furthering many NLP-related tasks, we decided to extract the complete human-readable text of every Wikipedia page (‘nif_context’), annotated with NIF tags. For this first iteration, we restricted the extent of the annotations to the structural text elements directly inferable from the HTML (‘nif_page_structure’). In addition, all contained text links are recorded in a dedicated dataset (‘nif_text_links’); a small parsing sketch follows this list.
    The DBpedia Association started the Open Extraction Challenge on the basis of these datasets. We aim to spur knowledge extraction from Wikipedia article texts in order to dramatically broaden and deepen the amount of structured DBpedia/Wikipedia data and to provide a platform for benchmarking various extraction tools.
    If you want to participate with your own NLP extraction engine, the next deadline, for SEMANTiCS 2017, is July 17th.
    We included an example of these structures in section five of the download page of this release.
  • A considerable amount of work has been done to streamline the extraction process of DBpedia, converting many of the extraction tasks into an ETL setting (using SPARK). We are working in concert with the Semantic Web Company to further enhance these results by introducing a workflow management environment to increase the frequency of our releases.
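
To make the NIF datasets more concrete, here is the small parsing sketch referenced above, assuming Python with rdflib, an illustrative filename, and the standard NIF Core vocabulary (nif:Context, nif:isString):

from rdflib import Graph, Namespace, RDF

NIF = Namespace("http://persistence.uni-leipzig.org/nlp2rdf/ontologies/nif-core#")

g = Graph()
g.parse("nif_context_en.ttl", format="turtle")  # illustrative filename

# Each nif:Context resource carries an article's plain text in nif:isString.
for context in g.subjects(RDF.type, NIF.Context):
    text = g.value(context, NIF.isString)
    if text is not None:
        print(context, "->", str(text)[:80])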

In case you missed it, here is what we changed in the previous release (2016-04):

  • We added a new extractor for citation data that provides two files:
    • citation links: linking resources to citations
    • citation data: trying to get additional data from citations. This is quite an interesting dataset, but we need help to clean it up.
  • In addition to datasets normalised to the English DBpedia (en-uris), we now also provide datasets normalised to the DBpedia Wikidata (DBw) datasets (wkd-uris). These sorted datasets will be the foundation for the upcoming fusion process with Wikidata. From the following releases on, the DBw-based URIs will be the only ones provided.
  • We now filter out triples from the Raw Infobox Extractor that are already mapped. E.g. no more “<x> dbo:birthPlace <z>” and “<x> dbp:birthPlace|dbp:placeOfBirth|… <z>” in the same resource. These triples are now moved to the “infobox-properties-mapped” datasets and not loaded on the main endpoint. See issue 22 for more details.
  • Major improvements in our citation extraction. See here for more details.
  • We incorporated the statistical distribution approach of Heiko Paulheim in creating type statements automatically and providing them as additional datasets (instance_types_sdtyped_dbo).

 

Upcoming Changes

  • DBpedia Fusion: We finally started working again on fusing DBpedia language editions. Johannes Frey is taking the lead in this project. The next release will feature intermediate results.
  • Id Management: Closely pertaining to the DBpedia Fusion project is our effort to introduce our own Id/IRI management, to become independent of Wikimedia-created IRIs. This will not entail changing our domain or entity naming regime, but it will provide the possibility of adding entities of any source or scope.
  • RML Integration: Wouter Maroy has already provided the necessary groundwork for switching the mappings wiki to an RML-based approach on GitHub. Wouter started working exclusively on implementing the Git-based wiki and the conversion of existing mappings last week. We look forward to the results of this process.
  • Further development of SPARK Integration and workflow-based DBpedia extraction, to increase the release frequency.

 

New Datasets

  • New languages extracted from Wikipedia:

South Azerbaijani (azb), Upper Sorbian (hsb), Limburgan (li), Minangkabau (min), Western Mari (mrj), Oriya (or), Ossetian (os)

  • SDTypes: We extended the coverage of the automatically created type statements (instance_types_sdtyped_dbo) to English, German and Dutch.
  • Extensions: In the extension folder (2016-10/ext) we provide two new datasets (both are to be considered experimental):
    • DBpedia World Facts: This dataset is authored by the DBpedia Association itself. It lists all countries, all currencies in use, and (most) languages spoken in the world, as well as how these concepts relate to each other (spoken in, primary language, etc.) and useful properties like ISO codes (ontology diagram). This dataset extends the very useful LEXVO dataset with facts from DBpedia and the CIA Factbook. Please report any errors or suggestions regarding this dataset to Markus.
    • JRC-Alternative-Names: This resource is a link-based complementary repository of spelling variants for person and organisation names. The data is multilingual and contains up to hundreds of variations per entity. It was extracted from the analysis of news reports by the Europe Media Monitor (EMM), as available on JRC-Names.

Community

The DBpedia community added new classes and properties to the DBpedia ontology via the mappings wiki. The DBpedia 2016-10 ontology encompasses:

  • 760 classes
  • 1,105 object properties
  • 1,622 datatype properties
  • 132 specialised datatype properties
  • 414 owl:equivalentClass and 220 owl:equivalentProperty mappings to external vocabularies

The editor community of the mappings wiki also defined many new mappings from Wikipedia templates to DBpedia classes. For the DBpedia 2016-10 extraction, we used a total of 5887 template mappings (DBpedia 2015-10: 5800 mappings). The top language, gauged by the number of mappings, is Dutch (648 mappings), followed by the English community (606 mappings).

Credits to

  • Markus Freudenberg (University of Leipzig / DBpedia Association) for taking over the whole release process and creating the revamped download & statistics pages.
  • Dimitris Kontokostas (University of Leipzig / DBpedia Association) for conveying his considerable knowledge of the extraction and release process.
  • All editors that contributed to the DBpedia ontology mappings via the Mappings Wiki.
  • The whole DBpedia Internationalization Committee for pushing the DBpedia internationalization forward.
  • Václav Zeman and the whole LHD team (University of Prague) for their contribution of additional DBpedia types.
  • Alan Meehan (TCD) for performing a big external link cleanup.
  • Aldo Gangemi (LIPN University, France & ISTC-CNR, Italy) for providing the links from DOLCE to DBpedia ontology.
  • SpringerNature for offering a co-internship to a bright student and developing a closer relation to DBpedia on multiple issues, as well as Links to their SciGraph subjects.
  • Kingsley Idehen, Patrick van Kleef, and Mitko Iliev (all OpenLink Software) for loading the new data set into the Virtuoso instance that provides 5-Star Linked Open Data publication and SPARQL Query Services.
  • OpenLink Software (http://www.openlinksw.com/) collectively for providing the SPARQL Query Services and Linked Open Data publishing infrastructure for DBpedia in addition to their continuous infrastructure support.
  • Ruben Verborgh from Ghent University – imec for publishing the dataset as Triple Pattern Fragments, and imec for sponsoring DBpedia’s Triple Pattern Fragments server.
  • Ali Ismayilov (University of Bonn) for extending and cleaning of the DBpedia Wikidata dataset.
  • All the GSoC students and mentors who have directly or indirectly worked on the DBpedia release.
  • Special thanks to members of the DBpedia Association, the AKSW and the Department for Business Information Systems of the University of Leipzig.

The work on the DBpedia 2016-10 release was financially supported by the European Commission through the ALIGNED project (Quality-Centric Software and Data Engineering).

More information about DBpedia can be found at http://dbpedia.org as well as in the new overview article about the project, available at http://wiki.dbpedia.org/Publications.

Have fun with the new DBpedia 2016-10 release!
