tools Archives - DBpedia Association
https://www.dbpedia.org/tools/

Databus Mods – Linked Data-driven Enrichment of Metadata
https://www.dbpedia.org/blog/databus-mods-linked-data-driven-enrichment-of-metadata/ (Mon, 09 Aug 2021)

DBpedia Databus Feature – Over the last few months, we gave our DBpedia members multiple chances to present their work, tools, and applications. In this way, our members have given exclusive insights on the DBpedia blog. This week we will start the DBpedia Databus Feature, which allows you to get more information about current and future developments around DBpedia and the DBpedia Databus. Have fun while reading!

As a review, the DBpedia Databus is a digital factory platform that aims to support FAIRness by facilitating a registry of files (on the Web) using DataID metadata. From a broader perspective, the Databus is part of DBpedia's vision of establishing a FAIR Linked Data backbone by building an ecosystem around its stable identifiers as a central component. Currently, this ecosystem consists of the Databus file registry, DBpedia Archivo, and the DBpedia Global ID management.

As part of this vision, this article presents Databus Mods, a flexible metadata enrichment mechanism for files published on the Databus using Linked Data technologies. 

Databus Mods are activities that analyze and assess files published with a Databus DataID and provide additional metadata in the form of fine-grained information such as data summaries, statistics, or descriptive metadata enrichments.

These activities create provenance metadata based on PROV-O, linking any generated metadata to the persistent Databus file identifiers independently of the file's publisher. The generated metadata is provided via a SPARQL endpoint and an HTTP file server, increasing (meta)data discovery and access.
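
To illustrate the metadata model, here is a minimal sketch of how such PROV-O provenance metadata could be looked up at the Mods SPARQL endpoint. The exact graph layout is an assumption for illustration, not the precise Mods schema:

    PREFIX prov: <http://www.w3.org/ns/prov#>

    # List Mod activities that used (analyzed) a Databus file,
    # together with the metadata entities they generated.
    SELECT ?activity ?file ?result ?time WHERE {
      ?activity a prov:Activity ;
                prov:used ?file ;           # persistent Databus file identifier
                prov:generated ?result .
      OPTIONAL { ?result prov:generatedAtTime ?time . }
    }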

Additionally, the thesis behind this work proposes the Databus Mods Architecture, which uses a master-worker approach to automate Databus file metadata enrichments. The Mod Master service monitors the Databus SPARQL endpoint for updates, distributes scheduled activities to Mod Workers, collects the generated metadata, and stores it uniformly. Mod Workers implement the metadata model and provide an HTTP interface through which the Mod Master invokes a Mod Activity for a specific Databus file. The Mod Master can handle multiple Mod Workers of the same type concurrently, allowing the system's throughput to scale.

The Databus Mods Architecture implementation is provided in a publicly accessible GitHub repository, allowing other users to deploy their own Mods by reusing existing components. Further, the repository contains a Maven library that can be used to create your own Mod Workers in JVM languages, or to validate implementations of the so-called Mod API, which the Mod Master needs in order to control a Mod Worker.

Currently, the DBpedia Databus provides five initial Databus Mod Workers. The following paragraphs showcase two essential Mods: the first applicable to all Databus files, the second specific to RDF files.

MIME-Type Mod. This essential Mod provides metadata about the specific MIME type of Databus files for other applications or Mods. The MIME-Type Mod analyzes every file on the Databus, sniffs its content using Apache Tika, and generates metadata that assigns the detected IANA media types to Databus file identifiers using the Mods metadata model.
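
Other applications can then select files by their detected media type. The following sketch assumes a hypothetical mod: namespace and property name, not the actual Mods vocabulary:

    PREFIX prov: <http://www.w3.org/ns/prov#>
    PREFIX mod:  <http://example.org/databus-mods/>    # placeholder namespace

    # Find Databus files whose sniffed media type is N-Triples.
    SELECT ?file WHERE {
      ?result prov:wasGeneratedBy [ prov:used ?file ] ;
              mod:mimeType "application/n-triples" .   # hypothetical property
    }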

VoID Mod. The Vocabulary of Interlinked Datasets (VoID) is a popular metadata vocabulary for describing the content of Linked Datasets. The VoID Mod generates statistics for RDF files based on the VoID vocabulary. A major use case of the VoID Mod is searching for relevant RDF datasets using the VoID Mod metadata. By writing federated queries, it is possible to filter Databus files for those containing specific properties or classes.

Example: a federated query over the VoID Mod results and the DataID to retrieve Databus files containing RDF statements with dbo:birthDate as property or dbo:Person as type; the results are filtered by dct:version and dataid:account.
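
A rough sketch of what such a federated query can look like follows. The void:dataDump link between VoID results and file identifiers, the DataID property shapes, and the filter values are assumptions for illustration:

    PREFIX void:   <http://rdfs.org/ns/void#>
    PREFIX dbo:    <http://dbpedia.org/ontology/>
    PREFIX dct:    <http://purl.org/dc/terms/>
    PREFIX dcat:   <http://www.w3.org/ns/dcat#>
    PREFIX dataid: <http://dataid.dbpedia.org/ns/core#>

    SELECT DISTINCT ?file WHERE {
      # VoID Mod metadata: dumps mentioning dbo:birthDate or dbo:Person
      { ?void void:propertyPartition [ void:property dbo:birthDate ] . }
      UNION
      { ?void void:classPartition [ void:class dbo:Person ] . }
      ?void void:dataDump ?file .

      # DataID metadata, federated from the Databus SPARQL endpoint
      SERVICE <https://databus.dbpedia.org/repo/sparql> {
        ?dataset dcat:distribution [ dcat:downloadURL ?file ] ;
                 dct:version ?version ;
                 dataid:account ?account .
        FILTER(?version = "2021.06.01")                           # placeholder version
        FILTER(?account = <https://databus.dbpedia.org/dbpedia>)  # placeholder account
      }
    }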

Databus Mods were created as part of my master’s thesis, which I submitted in spring 2021.

Stay safe and check Twitter or LinkedIn. Furthermore, you can subscribe to our Newsletter for the latest news and information around DBpedia.

Marvin Hofer

on behalf of the DBpedia Association

ContextMinds: Concept mapping supported by DBpedia
https://www.dbpedia.org/blog/contextminds/ (Fri, 16 Apr 2021)

Contribution from Marek Dudáš (Prague University of Economics and Business – VŠE)

ContextMinds is a tool that combines two ideas: concept mapping and knowledge graphs. What's concept mapping? With a bit of simplification, when you take a small subgraph of not more than a few tens of nodes from a knowledge graph (kg) and visualize it with the classic node-link (or “bubbles and arrows”) approach, you get a concept map. But concept maps are much older than knowledge graphs. They emerged in the 1970s and were originally intended to be created by hand, to represent a person's understanding of a given problem or question. Shortly after their “discovery” (using diagrams to represent relationships is probably a much older idea), they turned out to be a very useful educational tool.

Going back to knowledge graphs and DBpedia, ContextMinds lets you quickly create an overview of some problem you need to solve, study or explain.  

Figure 1: Text search in concepts from DBpedia – the starting point of concept map creation in ContextMinds.

How you can start 

Starting from a classic text search, you select concepts (nodes) from a knowledge graph, and ContextMinds shows how they are related (it loads the links from the knowledge graph). It also suggests other concepts from the kg that you might be interested in. The suggestions are drawn from the joint neighborhood of the nodes you have already selected and put into the view. Nodes are scored by relevance, basically by the number of links to what you have in the view. So, as you are creating your concept map, a continuously updated list of the roughly 30 most related concepts is available for simple drag & drop into your map.
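
Conceptually (this is an illustrative sketch, not ContextMinds' actual implementation), such a relevance ranking could be expressed against the public DBpedia endpoint like this, with two arbitrarily chosen selected concepts:

    PREFIX dbr: <http://dbpedia.org/resource/>

    # Rank candidate concepts by how many of the already-selected
    # nodes they are linked to, in either direction.
    SELECT ?candidate (COUNT(DISTINCT ?selected) AS ?score) WHERE {
      VALUES ?selected { dbr:Knowledge_graph dbr:Concept_map }
      { ?selected ?p ?candidate . } UNION { ?candidate ?p ?selected . }
      FILTER(isIRI(?candidate))
      FILTER(?candidate NOT IN (dbr:Knowledge_graph, dbr:Concept_map))
    }
    GROUP BY ?candidate
    ORDER BY DESC(?score)
    LIMIT 30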

Figure 2: Concept map and a list of top related concepts found in DBpedia by ContextMinds.

This helps you complete the concept map quickly. It also helps you discover relationships between the concepts that you were not aware of. If a concept or relationship is not in the knowledge graph yet, you can create it. It will not only appear in your concept map but will also become part of an extended knowledge graph for anyone who has access to your map. You can select the sources of concept & relationship suggestions at any time. To do that, you can choose any combination of the personal scope (concepts from maps created by you), the workspace scope (a shared space with teammates), DBpedia (or a different kg) and the public scope (everything created by the community and made public).

The best way of explaining how it works is a short video.

Use Case: Knowledge Graph 

ContextMinds was of course built with DBpedia as the initial knowledge graph. That instance is available at app.contextminds.com and more than 100 schools are using it as an educational aid. Recently, we discovered that the same model can be useful with other knowledge graphs. 

Say you run some machine learning that identifies objects in the knowledge graph as having interesting properties. Now you might need to look at what the graph says about them, either to explain the results or to show them to domain experts so that they can use them for further research. And that is where ContextMinds comes in. You put the concepts from the machine learning results into the view, and ContextMinds automatically adds the links between them and finds related concepts from their neighborhood. We have done this with kg-covid, a knowledge graph built from various biomedical and Covid-related datasets. There we use RDFrules to mine interesting rules and then visualize the results in ContextMinds (available at contextminds.vse.cz), so that biology experts may interpret them and explore further related information. More about that, perhaps, in another blog post.

Our Vision 

An additional fun fact: since we started developing ContextMinds to work solely with DBpedia, its data model is somewhat hard-coded into it. The plan is to enable loading multiple knowledge graphs into a single ContextMinds instance, so that users may interconnect objects from DBpedia with those from other datasets when creating a concept map. At the moment, however, we have to transform the data so that it looks like DBpedia before it can be loaded into ContextMinds.

A big thank you to ContextMinds, especially Marek Dudáš for presenting how ContextMinds combines concept mapping and knowledge graphs.

Yours,

DBpedia Association

ImageSnippets and DBpedia
https://www.dbpedia.org/blog/imagesnippets-and-dbpedia/ (Wed, 18 Dec 2019)

by Margaret Warren

The following post introduces ImageSnippets and how this tool profits from the use of DBpedia.

ImageSnippets – A Tool for Image Curation

For over two decades, ImageSnippets has been evolving as an ontology- and data-driven framework for image annotation research. Representing the informal knowledge people have about the context and provenance of images as RDF/linked data is challenging, but it has also been an enlightening and engaging journey, not only in applying formal semantic web theory to building image graphs but also in weaving together our interests with what others have been doing in the field of semantic annotation and knowledge graph building over these many years.

DBpedia provides the entities for our RDF descriptions

Since the beginning, we have always made use of DBpedia and other publicly available datasets to provide the entities for use in our RDF descriptions. Though ImageSnippets can be used to build special vocabularies around niche domains, our primary research is around relation ontology building, and we prefer to avoid creating new entities unless we absolutely cannot find them through any other service.

When we first went live with our basic system in 2013, we began hand-building tens of thousands of triples using terms primarily from DBpedia (the core of the linked data cloud). While there would often be an overlap of terms with other datasets – almost a case of too many choices – we formed a best practice of preferentially using DBpedia terms as often as possible, because they gave us the most utility for reasoning using the SKOS concepts built into the DBpedia service. We have also made extensive use of DBpedia Spotlight for named-entity extraction.

How to combine DBpedia & Wikidata and make it useful for ImageSnippets

But the addition of the Wikidata Query Service over the past 18 months or so has given us a new challenge: how to work with both! Since DBpedia and Wikidata both have class relationships that we can reason from, we found ourselves in a position to examine DBpedia and Wikidata in concert with each other through the use of mapping techniques between the two datasets.
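
As a simple illustration of this kind of combined use (a sketch, not ImageSnippets' internal code), the owl:sameAs links in DBpedia make it possible to resolve an entity's Wikidata counterpart and then walk up its Wikidata class hierarchy in a single federated query:

    PREFIX owl: <http://www.w3.org/2002/07/owl#>
    PREFIX dbr: <http://dbpedia.org/resource/>
    PREFIX wdt: <http://www.wikidata.org/prop/direct/>

    # Resolve the Wikidata counterpart of a DBpedia entity, then
    # fetch its Wikidata classes and their superclasses.
    SELECT DISTINCT ?wd ?class WHERE {
      dbr:Eiffel_Tower owl:sameAs ?wd .
      FILTER(STRSTARTS(STR(?wd), "http://www.wikidata.org/entity/"))
      SERVICE <https://query.wikidata.org/sparql> {
        ?wd wdt:P31/wdt:P279* ?class .
      }
    }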

How it works: ImageSnippets & DBpedia

When an image is saved, we build inference graphs over results from both DBpedia and Wikidata. These graphs can be revealed with simple SPARQL queries at our endpoint, and queries over subclasses, taxa and SKOS concepts can find image results in our custom search tool. We have also just recently added a pathfinder utility – highly useful for semantic explainability, as it returns the precise path of connections from an originating source entity to the target entity used in our custom image search.

Sometimes a query will produce very unintuitive results, and the pathfinder tool enables us to quickly locate semantic errors that lead to clearly erroneous misclassifications (for example, a search for the Wikidata subclass of ‘communication medium’ reveals images of restaurants and hotels because of misclassifications in Wikidata). In this way we can quickly troubleshoot the results of queries, using the images as visual cues to explore the accuracy of the semantic modelling in both datasets.


We are very excited about the new directions that can come of knitting the two knowledge graphs together through our visual interface, and we believe ImageSnippets has great potential to serve a more complex role in cleaning and aligning the two datasets, using the images as our guides.

A big thank you to Margaret Warren for providing some insights into her work at ImageSnippets.

Yours,

DBpedia Association

SEMANTiCS Interview: Dan Weitzner
https://www.dbpedia.org/blog/semantics-interview-dan-weitzner/ (Tue, 20 Aug 2019)

As the upcoming 14th DBpedia Community Meeting, co-located with SEMANTiCS 2019 in Karlsruhe, Sep 9-12, is drawing nearer, we would like to take the opportunity to introduce you to our DBpedia keynote speakers.

Today’s post features an interview with Dan Weitzner from WPSemantix who talks about timbr-DBpedia, which we blogged about recently, as well as future trends and challenges of linked data and the semantic web.

Dan Weitzner is co-founder and Vice President of Research and Development of WPSemantix. He obtained his Bachelor of Science in Computer Science from Florida Atlantic University. In collaboration with DBpedia, he and his colleagues at WPSemantix launched timbr, the first SQL Semantic Knowledge Graph that integrates Wikipedia and Wikidata Knowledge into SQL engines.

Dan Weitzner

Can you tell us something about your research focus?

WPSemantix bridges the worlds of standard databases and the Semantic Web by creating ontologies accessible in standard SQL. 

Our platform, timbr, is a virtual knowledge graph that maps existing data sources to abstract concepts, accessible directly in all the popular Business Intelligence (BI) tools and also natively integrated into Apache Spark, R, Python, Java and Scala.

timbr enables reasoning and inference for complex analytics without the need for costly Extract-Transform-Load (ETL) processes to graph databases.

How do you personally contribute to the advancement of semantic technologies?

We believe we have lowered the fundamental barriers to the adoption of semantic technologies for large organizations that want to benefit from knowledge graph capabilities without, first, requiring fundamental changes in their database infrastructure and, second, without expensive organizational changes or significant personnel retraining.

Additionally, we implemented the W3C Semantic Web principles to enable inference and inheritance between concepts in SQL, and to allow seamless integration of existing ontologies from OWL. Subsequently, users across organizations can do complex analytics using the same tools that they currently use to access and query their databases, and in addition, to facilitate the sophisticated query of big data without requiring highly technical expertise.  
timbr-DBpedia is one example of what can be achieved with our technology. This joint effort with the DBpedia Association allows semantic SQL querying of the DBpedia knowledge graph, and the semantic integration of DBpedia knowledge into data warehouses and data lakes. Finally, timbr-DBpedia allows organizations to benefit from enriching their data with DBpedia knowledge, combining it with machine learning and/or accessing it directly from their favourite BI tools.

Which trends and challenges do you see for linked data and the semantic web?

Currently, the use of semantic technologies for data exploration and data integration is a significant trend followed by data-driven communities. It allows companies to leverage relationship-rich data to gain meaningful insights into their data.

One of the big difficulties for the average developer and business intelligence analyst is the challenge of learning semantic technologies. Another is creating ontologies that are flexible and easily maintained. We aim to solve both challenges with timbr.

Which application areas for semantic technologies do you perceive as most promising?

I think semantic technologies will bloom in applications that require data integration and contextualization for machine learning models.

Ontology-based integration seems very promising, enabling accurate interpretation of data from multiple sources through the explicit definition of terms and relationships – particularly in big data systems, where ontologies could bring consistency, expressivity and abstraction capabilities to massive volumes of data.

As artificial intelligence becomes more and more important, what is your vision of AI?

I envision knowledge-based business intelligence and contextualized machine learning models. This will be the bedrock of cognitive computing as any analysis will be semantically enriched with human knowledge and statistical models.

This will bring analysts and data scientists to the next level of AI.

What are your expectations about Semantics 2019 in Karlsruhe?

I want to share our vision with the semantic community, and I would also like to learn about the challenges, visions and expectations of companies and organizations dealing with semantic technologies. I will present “timbr-DBpedia – Exploration and Query of DBpedia in SQL”.

The End

Visit SEMANTiCS 2019 in Karlsruhe, Sep 9-12 and find out more about timbr-DBpedia and all the other new developments at DBpedia. Get your tickets for our community meeting here. We are looking forward to meeting you during DBpedia Day.

Yours DBpedia Association

Call for Participation: DBpedia meetup @ XML Prague
https://www.dbpedia.org/blog/dbpedia-meetup-xml-prague/ (Wed, 26 Dec 2018)

We are happy to announce that the upcoming DBpedia meetup will be held in Prague, Czech Republic. During the XML Prague conference, Feb 7-9, the DBpedia community will get together on February 7, 2019.

Highlights

– Intro: DBpedia: Global and Unified Access to Knowledge (Graphs)

– DBpedia Databus presentation

– DBpedia Showcase Session

Quick Facts

– Web URL: https://wiki.dbpedia.org/meetings/Prague2019

– When: February 7th, 2019

– Where: University of Economics, nam. W. Churchilla 4, 130 67 Prague 3, Czech Republic

Schedule

– Please check the schedule for the upcoming DBpedia meetup here: https://wiki.dbpedia.org/meetings/Prague2019

Tickets

– Attending the DBpedia Community Meetup costs €40. DBpedia members get free admission, please contact your nearest DBpedia chapter or the DBpedia Association for a promotion code.

– You need to buy a ticket. Please check all details here: http://www.xmlprague.cz/conference-registration/

Sponsors and Acknowledgements

– XML conference Prague (http://www.xmlprague.cz/)

– Institute for Applied Informatics (http://infai.org/en/AboutInfAI)

– OpenLink Software (http://www.openlinksw.com/)

Organisation

– Milan Dojčinovski, AKSW/KILT

– Julia Holze, DBpedia Association

– Sebastian Hellmann, AKSW/KILT, DBpedia Association

– Tomáš Kliegr, KIZI/University of Economics, Prague

Tell us what cool things you do with DBpedia. If you would like to give a talk at the DBpedia meetup, please get in contact with the DBpedia Association.

We are looking forward to meeting you in Prague!

For latest news and updates check Twitter, Facebook and our Website or subscribe to our newsletter.

Your DBpedia Association

Chaudron, chawdron, cauldron and DBpedia
https://www.dbpedia.org/blog/chaudron/ (Tue, 30 Oct 2018)

Meet Chaudron

Before getting into the technical details, did you know the term chaudron derives from Old French and denotes a large metal cooking pot? The word was used as an alternative form of chawdron, which means entrails. Entrails and cauldron – a combo that seems quite fitting with Halloween coming along.

And now for something completely different

To begin with, Chaudron is a dataset of more than two million triples. It complements DBpedia with physical measures. The triples are automatically extracted from Wikipedia infoboxes using pattern-matching and formal-grammar approaches. This dataset adds triples to existing DBpedia resources and includes measures on various types of resources such as chemical elements, railways, people, places, aircraft and dams.

Chaudron was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.

Want to find out more about our DBpedia applications? Why not read about the DBpedia Chatbot, DBpedia Entity or the NLI-Go DBpedia Demo?

Happy reading & happy Halloween!

Yours DBpedia Association

PS: In case you want your DBpedia tool, demo or any kind of application published on our Website and the DBpedia Blog, fill out this form and submit your information.

Retrospective: GSoC 2018
https://www.dbpedia.org/blog/the-return-gsoc-2018/ (Wed, 10 Oct 2018)

With all the beta-testing, the evaluations of the community survey part I and part II and the preparations for SEMANTiCS 2018, we almost lost sight of telling you about the final results of GSoC 2018. In the following, we present a short recap of this year's students and the projects that made it to the finishing line of GSoC 2018.

Et Voilà

We started out with six students that committed to GSoC projects. However, in the course of the summer, some dropped out or did not pass the midterm evaluation. In the end, we had three finalists that made it through the program.

Meet Bharat Suri

… who worked on “Complex Embeddings for OOV Entities”. The aim of this project was to enhance the DBpedia Knowledge Base by enabling the model to learn from the corpus and generate embeddings for different entities, such as classes, instances and properties.  His code is available in his GitHub repository. Tommaso Soru, Thiago Galery and Peng Xu supported Bharat throughout the summer as his DBpedia mentors.

Meet Victor Fernandez

.. who worked on a “Web application to detect incorrect mappings across DBpedia’s in different languages”. The aim of his project was to create a web application and API to aid in automatically detecting inaccurate DBpedia mappings. The mappings for each language are often not aligned, causing inconsistencies in the quality of the RDF generated. The final code of this project is available in Victor’s repository on GitHub. He was mentored by Mariano Rico and Nandana Mihindukulasooriya.

Meet Aman Mehta

… whose project aimed at building a model which allows users to query DBpedia directly using natural language, without needing any previous experience in SPARQL. His task was to train a Sequence-2-Sequence Neural Network model to translate any Natural Language Query (NLQ) into the corresponding SPARQL query. See the results of this project in Aman's GitHub repository. Tommaso Soru and Ricardo Usbeck were his DBpedia mentors during the summer.

Finally, these projects will contribute to the overall development of DBpedia. We are very satisfied with the contributions and results our students produced. Furthermore, we would like to genuinely thank all students and mentors for their effort. We hope to stay in touch and see a few faces again next year.

A special thanks goes out to all mentors and students whose projects did not make it through.

GSoC Mentor Summit

Now it is the mentors' turn to take part in this year's GSoC mentor summit, October 12th till 14th. This year, Mariano Rico and Thiago Galery will represent DBpedia at the event. Their task is to engage in a vital discussion about this year's program, about lessons learned, and about highlights and drawbacks they experienced during the summer. Hopefully, they will return with new ideas from the exchange with mentors from other open source projects. In turn, we hope to improve our part of the program for students and mentors.

Sit tight, follow us on Twitter and we will update you about the event soon.

Yours DBpedia Association

DBpedia Chapters – Survey Evaluation – Episode Two
https://www.dbpedia.org/blog/dbpedia-chapters-survey-evaluation-episode-two/ (Tue, 02 Oct 2018)

Welcome back to part two of the evaluation of the surveys we conducted with the DBpedia chapters.

Survey Evaluation – Episode Two

The second survey focused on technical matters. We asked the chapters about the usage of DBpedia services and tools, technical problems and challenges and potential reasons to overcome them.  Have a look below.

Again, only nine out of 21 DBpedia chapters participated in this survey. And again, that means the results represent only roughly 42% of the DBpedia chapter population.

The good news is, all chapters maintain a local DBpedia endpoint. Yay! More than 55% of the chapters perform their own extraction. The rest apply a hybrid approach, reusing some datasets from DBpedia releases and additionally extracting some on their own.

Datasets, Services and Applications

In terms of the frequency of dataset updates, the situation is as follows: 44.4% of the chapters update them once a year. The answers of the remaining ones differ in equal shares, depending on various factors. See the graph below.

When it comes to the maintenance of links to local datasets, most of the chapters do not have additional ones. However, some do maintain links to, for example, Greek WordNet, the National Library of Greece Authority record, Geonames.jp and the Japanese WordNet. Furthermore, some of the chapters even host other datasets of local interest, but mostly in a separate endpoint, so they keep separate graphs.

Apart from hosting their own endpoint, most chapters maintain one or the other additional service such as Spotlight, LodLive or LodView.

Moreover,  the chapters have additional applications they developed on top of DBpedia data and services.

Besides, they also gave us some reasons why they were not able to deploy DBpedia related services. See their replies below.

DBpedia Chapter set-up

Lastly, we asked the technical heads of the chapters what the hardest task for setting up their chapter had been.  The answers, again, vary as the starting position of each chapter differed. Read a few of their replies below.

The hardest technical task for setting up the chapter was:

  • to keep virtuoso up to date
  • the chapter specific setup of DBpedia plugin in Virtuoso
  • the Extraction Framework
  • configuring Virtuoso for serving data using server’s FQDN and Nginx proxying
  • setting up the Extraction Framework, especially for abstracts
  • correctly setting up the extraction process and the DBpedia facet browser
  • fixing internationalization issues, and updating the endpoint
  • keeping the extraction framework working and up to date
  • updating the server to the specific requirements for further compilation – we work on Debian

Final words

With all the data and results we gathered, we will get together with our chapter coordinator to develop a strategy for addressing the technical as well as organizational issues the surveys revealed. In this way, we hope to facilitate a better exchange between the chapters and with us, the DBpedia Association. Moreover, we intend to minimize the barriers to setting up and maintaining a DBpedia chapter so that our chapter community may thrive and prosper.

In the meantime, spread your work and share it with the community. Do not forget to follow and tag us on Twitter ( @dbpedia ). You may also want to subscribe to our newsletter.

We will keep you posted about any updates and news.

Yours

DBpedia Association

Meet the DBpedia Chatbot
https://www.dbpedia.org/blog/dbpedia-chatbot-2/ (Wed, 22 Aug 2018)

This year's GSoC is slowly coming to an end, with final evaluations already being submitted. In order to bridge the waiting time until the final results are published, we would like to draw your attention to a former project and great tool that was developed during last year's GSoC.

Meet the DBpedia Chatbot. 

DBpedia Chatbot is a conversational Chatbot for DBpedia which is accessible through the following platforms:

  1. A Web Interface
  2. Slack
  3. Facebook Messenger

Main Purpose

The bot is capable of responding to users in the form of simple short text messages or through more elaborate interactive messages. Users can communicate or respond to the bot through text and also through interactions (such as clicking on buttons/links). There are 4 main purposes for the bot. They are:

  1. Answering factual questions
  2. Answering questions related to DBpedia
  3. Exposing the research work being done in DBpedia as product features
  4. Casual conversation/banter

Question Types

The bot tries to answer text-based questions of the following types:

Natural Language Questions
  1. Give me the capital of Germany
  2. Who is Obama?

Location Information
  1. Where is the Eiffel Tower?
  2. Where is France’s capital?

Service Checks

Users can ask the bot to check if vital DBpedia services are operational.

  1. Is DBpedia down?
  2. Is lookup online?

Language Chapters

Users can ask for basic information about specific DBpedia local chapters.

  1. DBpedia Arabic
  2. German DBpedia

Templates

These are predominantly questions related to DBpedia for which the bot provides predefined templatized answers. Some examples include:

  1. What is DBpedia?
  2. How can I contribute?
  3. Where can I find the mapping tool?

Banter

Messages which are casual in nature fall under this category. For example:

  1. Hi
  2. What is your name?

If you would like to have a closer look at the internal processes and how the chatbot was developed, check out the DBpedia GitHub pages.

DBpedia Chatbot was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.

In case you want your DBpedia-based tool or demo published on our website, just follow the link and submit your information; we will do the rest.

Yours

DBpedia Association

DBpedia Entity – Standard Test Collection for Entity Search over DBpedia
https://www.dbpedia.org/blog/dbpedia-entity/ (Tue, 14 Aug 2018)

Today we are featuring DBpedia Entity in our blog series introducing interesting DBpedia applications and tools to the DBpedia community and beyond. Read on and enjoy.

DBpedia-Entity is a standard test collection for entity search over the DBpedia knowledge base. It is meant for evaluating retrieval systems that return a ranked list of entities (DBpedia URIs) in response to a free text user query.

The first version of the collection (DBpedia-Entity v1) was released in 2013, based on DBpedia v3.7 [1]. It was created by assembling search queries from a number of entity-oriented benchmarking campaigns and mapping relevant results to DBpedia. An updated version of the collection, DBpedia-Entity v2, was released in 2017 as a result of a collaborative effort between the IAI group of the University of Stavanger, the Norwegian University of Science and Technology, Wayne State University, and Carnegie Mellon University [2]. It was published at the 40th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR'17), where it received a Best Short Paper Honorable Mention Award. See the paper and poster.

DBpedia Entity was published on wiki.dbpedia.org and is one of many other projects and applications featuring DBpedia.
