RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model, as defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium free of charge, provided that the original work is properly cited.
  • Authors retain the copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website.

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast, high-quality, yet lower-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver to editors and reviewers of RonPub as a reward for their work. See the respective journal webpage for the concrete publication fee.

RonPub Publication Criteria

Our primary concern is the quality, not the quantity, of publications; we publish only high-quality scholarly papers. Our Publication Criteria describe what a contribution must offer to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

In order to ensure the publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater, and Viper, to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. Publications archived there are not only preserved for the long term but will also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems just as it was on the original, long-obsolete systems.

Where is RonPub?

RonPub is a corporation registered in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, offering wonderful seaside resorts and sandy beaches as well as good restaurants.

Open Journal of Semantic Web (OJSW)
OJSW, an open access and peer-reviewed online journal, publishes original and creative research results on the Semantic Web. OJSW distributes its articles under the open access model. All articles of OJSW are fully open access and available online to readers free of charge. There is no restriction on the length of the papers. Accepted manuscripts are published online immediately.
Publisher: RonPub UG (haftungsbeschränkt), Lübeck, Germany
Contact: OJSW Editorial Office
ISSN: 2199-336X
Call for Papers: txt (UTF-8), txt (ASCII), pdf

Aims & Scope

The current World Wide Web enables easy, instant access to a vast amount of online information. However, the content in the Web is typically intended for human consumption and is not tailored to be machine-processed.

The Semantic Web, which is intended to establish a machine-understandable web, offers a promising solution for mining and analyzing web content. The Semantic Web is currently changing from an emergent trend to a technology used in complex real-world applications.
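
To make the contrast concrete, here is a minimal sketch (our illustration, not part of OJSW; it assumes the Python rdflib package) of machine-processable content: a fact expressed as RDF and answered precisely with a SPARQL query, something a plain HTML page cannot offer.

    # Minimal sketch: the same information as machine-readable RDF,
    # queried with SPARQL via rdflib (pip install rdflib).
    from rdflib import Graph

    TURTLE = """
    @prefix ex: <http://example.org/> .
    ex:OJSW ex:publishes ex:SemanticWebResearch ;
            ex:issn "2199-336X" .
    """

    g = Graph()
    g.parse(data=TURTLE, format="turtle")

    # A machine can now answer a precise question about the data.
    query = "SELECT ?issn WHERE { ?journal <http://example.org/issn> ?issn }"
    for row in g.query(query):
        print(row.issn)  # -> 2199-336X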

OJSW publishes regular research papers, short communications, reviews, and visionary papers on all aspects of Semantic Web technologies. There is no restriction on the length of the papers.

  • Regular research papers: present full original findings backed by adequate experimental research, and make substantial theoretical and empirical contributions to the research field. Research papers should be written as concisely as possible.
  • Short communications: report novel research ideas. The work presented should be technically sound and significantly advance the state of the art. Short communications also include exploratory studies and methodological articles.
  • Research reviews: provide insightful and accessible overviews of a certain field of research. They conceptualize research issues, synthesize existing findings, and advance the understanding of the field. They may also suggest new research issues and directions.
  • Visionary papers: identify new research issues and future research directions, and describe new research visions in the field, visions that can potentially have great impact on future society and daily life.

We are interested in scientific articles on all aspects of the Semantic Web, including but not limited to the following topics:

  • Semantic Data Management and Optimization
    • Big Data
    • Graph Databases
    • Federations
    • Spatial Data
  • Rule-based Languages like RIF and SWRL
  • Microformats (e.g. RDFa)
  • Ontology-based Approaches for
    • Modelling
    • Mapping
    • Evolution
    • Real-world ontologies
  • Reasoning Approaches
    • Real-World Applications
    • Efficient Algorithms
  • Linked Data
    • Integration of Heterogeneous Linked Data
    • Real-World Applications
    • Statistics and Visualizations
    • Quality
    • Ranking Techniques
    • Provenance
    • Mining and Consuming Linked Data
  • Semantic Web stream processing
    • Dynamic Data
    • Temporal Semantics
  • Performance and Evaluation of Semantic Web Technologies
    • Benchmarking for Semantic Web Technologies
  • Semantic Web Services
  • Semantic Web Applications in specific domains, e.g.,
    • Life Science,
    • eGovernment,
    • eEnvironment,
    • eHealth

Author Guidelines

Publication Criteria

Our Publication Criteria provide important information to help authors prepare manuscripts with a high likelihood of being accepted.

Open & Transparent Reviews

RonPub’s OJSW provides two review processes: open & transparent as well as traditional. OJSW authors and reviewers can choose the review process that they prefer.

  1. Open & Transparent Review:
    1. Submitted manuscripts are posted on the journal's website and are publicly available.
      Author names can be blinded, depending on the authors' wishes.
    2. Manuscripts will be evaluated by the reviewers selected by members of the editorial board.
    3. Manuscripts are also open for evaluation and comments from the public.
      See our Open Review page for manuscripts currently under open review.
    4. Comments that do not follow established good scholarly practice will be removed.
    5. Evaluations from the selected reviewers and from public participants will be posted on the journal's website after the first-round review is finished.
      The names of reviewers and of public participants are withheld upon their request.
    6. The responses from the authors are posted on the journal’s website.
    7. Editors make a decision on acceptance or rejection based on the review results.
    8. Authors of rejected manuscripts may request to remove their articles and reviews from the journal’s website.
  2. Traditional Review:
    1. Manuscripts will be evaluated by the reviewers selected by members of the editorial board.
    2. Editors make a decision on acceptance or rejection based on the review results.
    3. The decision and anonymous reviews of manuscripts will be sent to authors.

Manuscript Preparation

Please prepare your manuscripts using the manuscript template of the journal. It is available for download as a Word version (doc, docx) and a LaTeX version (zip). The template describes the format and structure of manuscripts and provides other necessary information for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts following the information on the submit page. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file along with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the names, affiliations, and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an email confirming its receipt. Subsequent enquiries concerning the progress of the paper should be sent to the email address of the journal.

Review Procedure

OJSW is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in OJSW are strictly and thoroughly peer-reviewed. When a manuscript is submitted, the editor-in-chief assigns it to an appropriate editor who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports will be collected. The editor and the editor-in-chief evaluate the manuscript itself and the review reports and make an accept/revision/reject decision. Authors will be informed of the decision and the reviewing results, on average within 6-8 weeks after manuscript submission. In the case of revision, authors are required to revise the manuscript adequately to address the concerns raised in the review reports. A second round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. Publications archived there are not only preserved for the long term but will also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems just as it was on the original, long-obsolete systems. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive the articles they have published with RonPub.

Publication Ethics Statement

In order to ensure the publishing quality and the reputation of the journal, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater, and Viper, to check the content of manuscripts submitted to our journals against existing publications.

Our journal follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Articles of OJSW


 Open Access 

Generating Sound from the Processing in Semantic Web Databases

Sven Groppe, Rico Klinckenberg, Benjamin Warnke

Open Journal of Semantic Web (OJSW), 8(1), Pages 1-27, 2021, Downloads: 1605

Full-Text: pdf | URN: urn:nbn:de:101:1-2022011618330544843704 | GNL-LP: 1249658837 | Meta-Data: tex xml rdf rss

Abstract: Databases process a lot of intermediate steps generating many intermediate results during data processing for answering queries. It is not easy to understand these complex tasks and algorithms for students, developers and all those interested in databases. For this purpose, an additional medium is sonification, which maps data to auditory dimensions and offers a new audible experience to their listeners. Hence, we propose a sonification of query processing paired with a corresponding visualization both integrated in a web application. In a demonstration of our approach and in an extensive user evaluation we show that listeners increase their understanding of the operators' functionality and sonification supports easy remembering of requirements such as the fact that merge joins work on sorted input. Furthermore, new ways of analyzing query processing are possible with our proposed sonification approach.

BibTex:

    @Article{OJSW_2021v8i1n01_Groppe,
        title     = {Generating Sound from the Processing in Semantic Web Databases},
        author    = {Sven Groppe and
                     Rico Klinckenberg and
                     Benjamin Warnke},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2021},
        volume    = {8},
        number    = {1},
        pages     = {1--27},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022011618330544843704},
        urn       = {urn:nbn:de:101:1-2022011618330544843704},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Databases process a lot of intermediate steps generating many intermediate results during data processing for answering queries. It is not easy to understand these complex tasks and algorithms for students, developers and all those interested in databases. For this purpose, an additional medium is sonification, which maps data to auditory dimensions and offers a new audible experience to their listeners. Hence, we propose a sonification of query processing paired with a corresponding visualization both integrated in a web application. In a demonstration of our approach and in an extensive user evaluation we show that listeners increase their understanding of the operators' functionality and sonification supports easy remembering of requirements such as the fact that merge joins work on sorted input. Furthermore, new ways of analyzing query processing are possible with our proposed sonification approach.}
    }
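
The following is a hypothetical sketch of the general sonification idea described in the abstract, not the authors' actual system: intermediate results of a merge join are mapped onto pitches, so that sorted input becomes audible as a rising melody. It is pure computation with no audio playback; the mapping is our own invention for illustration.

    A4 = 440.0  # reference pitch in Hz

    def value_to_frequency(value, lo, hi, semitone_span=24):
        """Linearly map a value in [lo, hi] onto semitones around A4."""
        position = (value - lo) / (hi - lo)
        semitones = position * semitone_span - semitone_span / 2
        return A4 * 2 ** (semitones / 12)

    left = [1, 3, 4, 7]   # sorted inputs of a merge join
    right = [3, 4, 5, 7]
    for v in sorted(set(left) & set(right)):  # the join matches, in order
        print(f"match {v}: {value_to_frequency(v, 1, 7):.1f} Hz")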

 Open Access 

NextGen Multi-Model Databases in Semantic Big Data Architectures

Irena Holubova, Stefanie Scherzinger

Open Journal of Semantic Web (OJSW), 7(1), Pages 1-16, 2020, Downloads: 3676

Full-Text: pdf | URN: urn:nbn:de:101:1-2020011918332157719390 | GNL-LP: 1203064675 | Meta-Data: tex xml rdf rss

Abstract: When semantic big data is managed in commercial settings, with time, the need may arise to integrate and interlink records from various data sources. In this vision paper, we discuss the potential of a new generation of multi-model database systems as data backends in such settings. Discussing a specific example scenario, we show how this family of database systems allows for agile and flexible schema management. We also identify open research challenges in generating sound triple-views from data stored in interlinked models, as a basis for SPARQL querying. We then conclude with a general overview of multi-model data management systems, to provide a wider scope of the problem domain.

BibTex:

    @Article{OJSW_2020v7i1n01_Holubova,
        title     = {NextGen Multi-Model Databases in Semantic Big Data Architectures},
        author    = {Irena Holubova and
                     Stefanie Scherzinger},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2020},
        volume    = {7},
        number    = {1},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020011918332157719390},
        urn       = {urn:nbn:de:101:1-2020011918332157719390},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {When semantic big data is managed in commercial settings, with time, the need may arise to integrate and interlink records from various data sources. In this vision paper, we discuss the potential of a new generation of multi-model database systems as data backends in such settings. Discussing a specific example scenario, we show how this family of database systems allows for agile and flexible schema management. We also identify open research challenges in generating sound triple-views from data stored in interlinked models, as a basis for SPARQL querying. We then conclude with a general overview of multi-model data management systems, to provide a wider scope of the problem domain.}
    }
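
As a hedged illustration of the "triple-view" idea from the abstract (our sketch under the assumption that rdflib is available, not the authors' proposal), the snippet below exposes a document-model record as RDF triples, which a SPARQL engine could then query alongside data from other models:

    # Sketch: derive a triple-view from a JSON-like document record.
    from rdflib import Graph, Literal, Namespace

    EX = Namespace("http://example.org/")
    order = {"id": "o1", "customer": "c42", "total": 99.5}  # document model

    g = Graph()
    subject = EX[order["id"]]
    for key, value in order.items():
        if key != "id":
            g.add((subject, EX[key], Literal(value)))  # one triple per field

    print(g.serialize(format="nt"))  # N-Triples view of the document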

 Open Access 

On Distributed SPARQL Query Processing Using Triangles of RDF Triples

Hubert Naacke, Olivier Curé

Open Journal of Semantic Web (OJSW), 7(1), Pages 17-32, 2020, Downloads: 2474

Full-Text: pdf | URN: urn:nbn:de:101:1-2020112218333311672109 | GNL-LP: 1221942700 | Meta-Data: tex xml rdf rss

Abstract: Knowledge Graphs are providing valuable functionalities, such as data integration and reasoning, to an increasing number of applications in all kinds of companies. These applications partly depend on the efficiency of a Knowledge Graph management system which is often based on the RDF data model and queried with SPARQL. In this context, query performance is preponderant and relies on an optimizer that usually makes an intensive usage of a large set of indexes. Generally, these indexes correspond to different re-orderings of the subject, predicate and object of a triple pattern. In this work, we present a novel approach that considers indexes formed by a frequently encountered basic graph pattern: triangle of triples. We propose dedicated data structures to store these triangles, provide distributed algorithms to discover and materialize them, including inferred triangles, and detail query optimization techniques, including a data partitioning approach for biased data. We provide an implementation that runs on top of Apache Spark and experiment on two real-world RDF data sets. This evaluation emphasizes the performance boost (up to 40x on query processing) that one can obtain by using our approach when facing triangles of triples.

BibTex:

    @Article{OJSW_2020v7i1n02_Cure,
        title     = {On Distributed SPARQL Query Processing Using Triangles of RDF Triples},
        author    = {Hubert Naacke and
                     Olivier Cur\'{e}},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2020},
        volume    = {7},
        number    = {1},
        pages     = {17--32},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020112218333311672109},
        urn       = {urn:nbn:de:101:1-2020112218333311672109},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Knowledge Graphs are providing valuable functionalities, such as data integration and reasoning, to an increasing number of applications in all kinds of companies. These applications partly depend on the efficiency of a Knowledge Graph management system which is often based on the RDF data model and queried with SPARQL. In this context, query performance is preponderant and relies on an optimizer that usually makes an intensive usage of a large set of indexes. Generally, these indexes correspond to different re-orderings of the subject, predicate and object of a triple pattern. In this work, we present a novel approach that considers indexes formed by a frequently encountered basic graph pattern: triangle of triples. We propose dedicated data structures to store these triangles, provide distributed algorithms to discover and materialize them, including inferred triangles, and detail query optimization techniques, including a data partitioning approach for biased data. We provide an implementation that runs on top of Apache Spark and experiment on two real-world RDF data sets. This evaluation emphasizes the performance boost (up to 40x on query processing) that one can obtain by using our approach when facing triangles of triples.}
    }
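
The "triangle of triples" the paper indexes is simply a basic graph pattern whose three triple patterns close a cycle. The sketch below (ours, using rdflib on a toy graph; the paper's distributed Spark implementation is far more involved) enumerates such triangles with plain SPARQL:

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:a ex:p ex:b . ex:b ex:q ex:c . ex:c ex:r ex:a .
    """, format="turtle")

    # Three triple patterns forming a cycle a -> b -> c -> a.
    TRIANGLE = """
    SELECT ?a ?b ?c WHERE {
      ?a ?p1 ?b .
      ?b ?p2 ?c .
      ?c ?p3 ?a .
    }"""
    for a, b, c in g.query(TRIANGLE):
        print(a, b, c)  # each triangle appears once per rotation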

 Open Access 

Ten Ways of Leveraging Ontologies for Rapid Natural Language Processing Customization for Multiple Use Cases in Disjoint Domains

Tatiana Erekhinskaya, Marta Tatu, Mithun Balakrishna, Sujal Patel, Dmitry Strebkov, Dan Moldovan

Open Journal of Semantic Web (OJSW), 7(1), Pages 33-51, 2020, Downloads: 3496

Full-Text: pdf | URN: urn:nbn:de:101:1-2020112218332779310329 | GNL-LP: 1221942689 | Meta-Data: tex xml rdf rss

Abstract: With the ever-growing adoption of AI technologies by large enterprises, purely data-driven approaches have dominated the field in recent years. For a single use case, a development process looks simple: agreeing on an annotation schema, labeling the data, and training the models. As the number of use cases and their complexity increases, the development teams face issues with collective governance of the models, scalability and reusability of data and models. These issues are widely addressed on the engineering side, but not so much on the knowledge side. Ontologies have been a well-researched approach for capturing knowledge and can be used to augment a data-driven methodology. In this paper, we discuss 10 ways of leveraging ontologies for Natural Language Processing (NLP) and its applications. We use ontologies for rapid customization of an NLP pipeline, ontology-related standards to power a rule engine and provide a standard output format. We also discuss various use cases for medical, enterprise, financial, legal, and security domains, centered around three NLP-based applications: semantic search, question answering and natural language querying.

BibTex:

    @Article{OJSW_2020v7i1n03_Erekhinskaya,
        title     = {Ten Ways of Leveraging Ontologies for Rapid Natural Language Processing Customization for Multiple Use Cases in Disjoint Domains},
        author    = {Tatiana Erekhinskaya and
                     Marta Tatu and
                     Mithun Balakrishna and
                     Sujal Patel and
                     Dmitry Strebkov and
                     Dan Moldovan},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2020},
        volume    = {7},
        number    = {1},
        pages     = {33--51},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2020112218332779310329},
        urn       = {urn:nbn:de:101:1-2020112218332779310329},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the ever-growing adoption of AI technologies by large enterprises, purely data-driven approaches have dominated the field in recent years. For a single use case, a development process looks simple: agreeing on an annotation schema, labeling the data, and training the models. As the number of use cases and their complexity increases, the development teams face issues with collective governance of the models, scalability and reusability of data and models. These issues are widely addressed on the engineering side, but not so much on the knowledge side. Ontologies have been a well-researched approach for capturing knowledge and can be used to augment a data-driven methodology. In this paper, we discuss 10 ways of leveraging ontologies for Natural Language Processing (NLP) and its applications. We use ontologies for rapid customization of an NLP pipeline, ontology-related standards to power a rule engine and provide a standard output format. We also discuss various use cases for medical, enterprise, financial, legal, and security domains, centered around three NLP-based applications: semantic search, question answering and natural language querying.}
    }

 Open Access 

Integrity Proofs for RDF Graphs

Andrew Sutton, Reza Samavi

Open Journal of Semantic Web (OJSW), 6(1), Pages 1-18, 2019, Downloads: 4418

Full-Text: pdf | URN: urn:nbn:de:101:1-2018102818300947746192 | GNL-LP: 117004476X | Meta-Data: tex xml rdf rss

Abstract: Representing open datasets with the RDF model is becoming increasingly popular. An important aspect of this data model is that it can utilize the methods of computing cryptographic hashes to verify the integrity of RDF graphs. In this paper, we first develop a number of metrics to compare the state-of-the-art integrity proof methods and then present two new approaches to generate an integrity proof of RDF datasets: (i) semantic-based and (ii) structure-based. The semantic-based approach leverages timestamps (or other inherent notions of ordering) as an indexing key to construct a sorted Merkle tree variation, where timestamps are semantically extractable from the dataset. The structure-based approach utilizes the redundant structure of large RDF datasets to compress the dataset statements prior to generating a variation of a Merkle tree. We provide a theoretical analysis and an experimental evaluation of our two proposed methods. Compared to the Merkle and sorted Merkle tree, the semantic-based approach achieves faster querying performance for large datasets. The structure-based approach is well suited when RDF datasets contain large amounts of semantic redundancies. We also evaluate our methods' resistance to adversarial threats.

BibTex:

    @Article{OJSW_2019v6i1n01_Sutton,
        title     = {Integrity Proofs for RDF Graphs},
        author    = {Andrew Sutton and
                     Reza Samavi},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2019},
        volume    = {6},
        number    = {1},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018102818300947746192},
        urn       = {urn:nbn:de:101:1-2018102818300947746192},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Representing open datasets with the RDF model is becoming increasingly popular. An important aspect of this data model is that it can utilize the methods of computing cryptographic hashes to verify the integrity of RDF graphs. In this paper, we first develop a number of metrics to compare the state-of-the-art integrity proof methods and then present two new approaches to generate an integrity proof of RDF datasets: (i) semantic-based and (ii) structure-based. The semantic-based approach leverages timestamps (or other inherent notions of ordering) as an indexing key to construct a sorted Merkle tree variation, where timestamps are semantically extractable from the dataset. The structure-based approach utilizes the redundant structure of large RDF datasets to compress the dataset statements prior to generating a variation of a Merkle tree. We provide a theoretical analysis and an experimental evaluation of our two proposed methods. Compared to the Merkle and sorted Merkle tree, the semantic-based approach achieves faster querying performance for large datasets. The structure-based approach is well suited when RDF datasets contain large amounts of semantic redundancies. We also evaluate our methods' resistance to adversarial threats.}
    }
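
For orientation, here is a minimal sketch of the classical scheme the paper's two methods refine: a Merkle root computed over the sorted N-Triples serialization of a graph, so that two graphs with the same triples yield the same root. This shows only the baseline, not the semantic- or structure-based variants proposed in the paper.

    import hashlib

    def sha256(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(triples):
        # Leaves: hashes of canonically sorted N-Triples lines.
        level = [sha256(t.encode("utf-8")) for t in sorted(triples)]
        while len(level) > 1:
            if len(level) % 2:                 # odd level: duplicate last node
                level.append(level[-1])
            level = [sha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0].hex()

    triples = [
        "<http://ex.org/a> <http://ex.org/p> <http://ex.org/b> .",
        "<http://ex.org/b> <http://ex.org/q> <http://ex.org/c> .",
    ]
    print(merkle_root(triples))  # equal triple sets -> equal root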

 Open Access 

Count Distinct Semantic Queries over Multiple Linked Datasets

Bogdan Kostov, Petr Kremen

Open Journal of Semantic Web (OJSW), 5(1), Pages 1-11, 2018, Downloads: 4771

Full-Text: pdf | URN: urn:nbn:de:101:1-201712245426 | GNL-LP: 1149497149 | Meta-Data: tex xml rdf rss

Abstract: In this paper, we revise count distinct queries and their semantics over datasets with incomplete knowledge, which is a typical case for the linked data integration scenario where datasets are viewed as ontologies. We focus on counting individuals present in the signature of the ontology. Specifically, we investigate the Certain Epistemic Count (CEC) and the Possible Epistemic Count (PEC) interval based semantics. In the case of CEC semantics, we propose an algorithm for its evaluation and we prove its correctness under a practical constraint of the queried ontology. We conduct and report experiments with the implementation of the proposed algorithm. We also prove decidability of the PEC semantics.

BibTex:

    @Article{OJSW_2018v5i1n01_Kostov,
        title     = {Count Distinct Semantic Queries over Multiple Linked Datasets},
        author    = {Bogdan Kostov and
                     Petr Kremen},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2018},
        volume    = {5},
        number    = {1},
        pages     = {1--11},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201712245426},
        urn       = {urn:nbn:de:101:1-201712245426},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this paper, we revise count distinct queries and their semantics over datasets with incomplete knowledge, which is a typical case for the linked data integration scenario where datasets are viewed as ontologies. We focus on counting individuals present in the signature of the ontology. Specifically, we investigate the Certain Epistemic Count (CEC) and the Possible Epistemic Count (PEC) interval based semantics. In the case of CEC semantics, we propose an algorithm for its evaluation and we prove its correctness under a practical constraint of the queried ontology. We conduct and report experiments with the implementation of the proposed algorithm. We also prove decidability of the PEC semantics.}
    }
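
Under complete knowledge, a count distinct query is ordinary SPARQL; the paper's CEC and PEC semantics generalize it to intervals when the queried ontology is incomplete. A minimal rdflib sketch of the classical case (our illustration only):

    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice a ex:Person . ex:bob a ex:Person . ex:bob ex:knows ex:alice .
    """, format="turtle")

    q = """SELECT (COUNT(DISTINCT ?p) AS ?n)
           WHERE { ?p a <http://example.org/Person> }"""
    print(next(iter(g.query(q))).n)  # -> 2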

 Open Access 

FICLONE: Improving DBpedia Spotlight Using Named Entity Recognition and Collective Disambiguation

Mohamed Chabchoub, Michel Gagnon, Amal Zouaq

Open Journal of Semantic Web (OJSW), 5(1), Pages 12-28, 2018, Downloads: 5785, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-2018080519301478077663 | GNL-LP: 1163928461 | Meta-Data: tex xml rdf rss

Abstract: In this paper we present FICLONE, which aims to improve the performance of DBpedia Spotlight, not only for the task of semantic annotation (SA), but also for the sub-task of named entity disambiguation (NED). To achieve this aim, first we enhance the spotting phase by combining a named entity recognition system (Stanford NER) with the results of DBpedia Spotlight. Second, we improve the disambiguation phase by using coreference resolution and exploiting a lexicon that associates a list of potential entities of Wikipedia to surface forms. Finally, to select the correct entity among the candidates found for one mention, FICLONE relies on collective disambiguation, an approach that has proved successful in many other annotators, and that takes into consideration the other mentions in the text. Our experiments show that FICLONE not only substantially improves the performance of DBpedia Spotlight for the NED sub-task but also generally outperforms other state-of-the-art systems. For the SA sub-task, FICLONE also outperforms DBpedia Spotlight against the dataset provided by the DBpedia Spotlight team.

BibTex:

    @Article{OJSW_2018v5i1n02_Cbabchoub,
        title     = {FICLONE: Improving DBpedia Spotlight Using Named Entity Recognition and Collective Disambiguation},
        author    = {Mohamed Chabchoub and
                     Michel Gagnon and
                     Amal Zouaq},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2018},
        volume    = {5},
        number    = {1},
        pages     = {12--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018080519301478077663},
        urn       = {urn:nbn:de:101:1-2018080519301478077663},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this paper we present FICLONE, which aims to improve the performance of DBpedia Spotlight, not only for the task of semantic annotation (SA), but also for the sub-task of named entity disambiguation (NED). To achieve this aim, first we enhance the spotting phase by combining a named entity recognition system (Stanford NER) with the results of DBpedia Spotlight. Second, we improve the disambiguation phase by using coreference resolution and exploiting a lexicon that associates a list of potential entities of Wikipedia to surface forms. Finally, to select the correct entity among the candidates found for one mention, FICLONE relies on collective disambiguation, an approach that has proved successful in many other annotators, and that takes into consideration the other mentions in the text. Our experiments show that FICLONE not only substantially improves the performance of DBpedia Spotlight for the NED sub-task but also generally outperforms other state-of-the-art systems. For the SA sub-task, FICLONE also outperforms DBpedia Spotlight against the dataset provided by the DBpedia Spotlight team.}
    }
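
The collective-disambiguation step can be pictured with a toy example (the lexicon and relatedness facts below are invented for illustration and are not FICLONE's actual model): each mention takes the candidate entity that is most coherent with the candidates of the other mentions in the text.

    LEXICON = {  # surface form -> candidate entities (hypothetical)
        "Paris": ["Paris_(France)", "Paris_(Texas)"],
        "Seine": ["Seine_(river)"],
    }
    RELATED = {("Paris_(France)", "Seine_(river)")}  # symmetric relatedness

    def related(e1, e2):
        return (e1, e2) in RELATED or (e2, e1) in RELATED

    def disambiguate(mentions):
        chosen = {}
        for m in mentions:
            others = [c for o in mentions if o != m for c in LEXICON[o]]
            # Score a candidate by how many other candidates it relates to.
            chosen[m] = max(LEXICON[m],
                            key=lambda c: sum(related(c, o) for o in others))
        return chosen

    print(disambiguate(["Paris", "Seine"]))  # Paris -> Paris_(France)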

 Open Access 

Assessing and Improving Domain Knowledge Representation in DBpedia

Ludovic Font, Amal Zouaq, Michel Gagnon

Open Journal of Semantic Web (OJSW), 4(1), Pages 1-19, 2017, Downloads: 6295

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194949 | GNL-LP: 1132361354 | Meta-Data: tex xml rdf rss

Abstract: With the development of knowledge graphs and the billions of triples generated on the Linked Data cloud, it is paramount to ensure the quality of data. In this work, we focus on one of the central hubs of the Linked Data cloud, DBpedia. In particular, we assess the quality of DBpedia for domain knowledge representation. Our results show that DBpedia has still much room for improvement in this regard, especially for the description of concepts and their linkage with the DBpedia ontology. Based on this analysis, we leverage open relation extraction and the information already available on DBpedia to partly correct the issue, by providing novel relations extracted from Wikipedia abstracts and discovering entity types using the dbo:type predicate. Our results show that open relation extraction can indeed help enrich domain knowledge representation in DBpedia.

BibTex:

    @Article{OJSW_2017v4i1n01_Font,
        title     = {Assessing and Improving Domain Knowledge Representation in DBpedia},
        author    = {Ludovic Font and
                     Amal Zouaq and
                     Michel Gagnon},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--19},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194949},
        urn       = {urn:nbn:de:101:1-201705194949},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the development of knowledge graphs and the billions of triples generated on the Linked Data cloud, it is paramount to ensure the quality of data. In this work, we focus on one of the central hubs of the Linked Data cloud, DBpedia. In particular, we assess the quality of DBpedia for domain knowledge representation. Our results show that DBpedia has still much room for improvement in this regard, especially for the description of concepts and their linkage with the DBpedia ontology. Based on this analysis, we leverage open relation extraction and the information already available on DBpedia to partly correct the issue, by providing novel relations extracted from Wikipedia abstracts and discovering entity types using the dbo:type predicate. Our results show that open relation extraction can indeed help enrich domain knowledge representation in DBpedia.}
    }
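
The kind of type linkage the paper assesses can be inspected directly on DBpedia's public SPARQL endpoint. A hedged sketch, assuming network access and the SPARQLWrapper package (endpoint availability may vary):

    from SPARQLWrapper import SPARQLWrapper, JSON

    sparql = SPARQLWrapper("https://dbpedia.org/sparql")
    sparql.setQuery("""
    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    SELECT ?type WHERE {
      <http://dbpedia.org/resource/Semantic_Web> rdf:type ?type .
    } LIMIT 10
    """)
    sparql.setReturnFormat(JSON)

    # Print the types currently asserted for the resource.
    for b in sparql.query().convert()["results"]["bindings"]:
        print(b["type"]["value"])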

 Open Access 

Scalable Generation of Type Embeddings Using the ABox

Mayank Kejriwal, Pedro Szekely

Open Journal of Semantic Web (OJSW), 4(1), Pages 20-34, 2017, Downloads: 4270

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112160 | GNL-LP: 1140718193 | Meta-Data: tex xml rdf rss

Abstract: Structured knowledge bases gain their expressive power from both the ABox and TBox. While the ABox is rich in data, the TBox contains the ontological assertions that are often necessary for logical inference. The crucial links between the ABox and the TBox are served by is-a statements (formally a part of the ABox) that connect instances to types, also referred to as classes or concepts. Latent space embedding algorithms, such as RDF2Vec and TransE, have been used to great effect to model instances in the ABox. Such algorithms work well on large-scale knowledge bases like DBpedia and Geonames, as they are robust to noise and are low-dimensional and real-valued. In this paper, we investigate a supervised algorithm for deriving type embeddings in the same latent space as a given set of entity embeddings. We show that our algorithm generalizes to hundreds of types, and via incremental execution, achieves near-linear scaling on graphs with millions of instances and facts. We also present a theoretical foundation for our proposed model, and the means of validating the model. The empirical utility of the embeddings is illustrated on five partitions of the English DBpedia ABox. We use visualization and clustering to show that our embeddings are in good agreement with the manually curated TBox. We also use the embeddings to perform a soft clustering on 4 million DBpedia instances in terms of the 415 types explicitly participating in is-a relationships in the DBpedia ABox. Lastly, we present a set of results obtained by using the embeddings to recommend types for untyped instances. Our method is shown to outperform another feature-agnostic baseline while achieving 15x speedup without any growth in memory usage.

BibTex:

    @Article{OJSW_2017v4i1n02_Kejriwal,
        title     = {Scalable Generation of Type Embeddings Using the ABox},
        author    = {Mayank Kejriwal and
                     Pedro Szekely},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {20--34},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112160},
        urn       = {urn:nbn:de:101:1-2017100112160},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Structured knowledge bases gain their expressive power from both the ABox and TBox. While the ABox is rich in data, the TBox contains the ontological assertions that are often necessary for logical inference. The crucial links between the ABox and the TBox are served by is-a statements (formally a part of the ABox) that connect instances to types, also referred to as classes or concepts. Latent space embedding algorithms, such as RDF2Vec and TransE, have been used to great effect to model instances in the ABox. Such algorithms work well on large-scale knowledge bases like DBpedia and Geonames, as they are robust to noise and are low-dimensional and real-valued. In this paper, we investigate a supervised algorithm for deriving type embeddings in the same latent space as a given set of entity embeddings. We show that our algorithm generalizes to hundreds of types, and via incremental execution, achieves near-linear scaling on graphs with millions of instances and facts. We also present a theoretical foundation for our proposed model, and the means of validating the model. The empirical utility of the embeddings is illustrated on five partitions of the English DBpedia ABox. We use visualization and clustering to show that our embeddings are in good agreement with the manually curated TBox. We also use the embeddings to perform a soft clustering on 4 million DBpedia instances in terms of the 415 types explicitly participating in is-a relationships in the DBpedia ABox. Lastly, we present a set of results obtained by using the embeddings to recommend types for untyped instances. Our method is shown to outperform another feature-agnostic baseline while achieving 15x speedup without any growth in memory usage.}
    }
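
A simple baseline in the same spirit (not the paper's exact supervised model): embed each type as the mean of its instances' entity embeddings, then recommend types for an untyped entity by cosine similarity. The toy vectors below are random stand-ins for real entity embeddings such as RDF2Vec outputs.

    import numpy as np

    rng = np.random.default_rng(0)
    entity_vecs = {f"e{i}": rng.normal(size=8) for i in range(6)}  # toy ABox
    type_members = {"Person": ["e0", "e1", "e2"], "City": ["e3", "e4"]}

    # Type embedding = mean of its instances' embeddings.
    type_vecs = {t: np.mean([entity_vecs[e] for e in members], axis=0)
                 for t, members in type_members.items()}

    def cosine(u, v):
        return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

    untyped = entity_vecs["e5"]
    scores = {t: cosine(untyped, v) for t, v in type_vecs.items()}
    print(max(scores, key=scores.get), scores)  # recommended type first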

 Open Access 

A Semantic Safety Check System for Emergency Management

Yogesh Pandey, Srividya K. Bansal

Open Journal of Semantic Web (OJSW), 4(1), Pages 35-50, 2017, Downloads: 5928, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266890 | GNL-LP: 1147193460 | Meta-Data: tex xml rdf rss

Abstract: There has been an exponential growth and availability of both structured and unstructured data that can be leveraged to provide better emergency management in case of natural disasters and humanitarian crises. This paper is an extension of a semantics-based web application for safety check, which makes use of semantic web technologies to extract different kinds of relevant data about a natural disaster and alerts its users. The goal of this work is to design and develop a knowledge-intensive application that identifies those people that may have been affected due to natural disasters or man-made disasters at any geographical location and notifies them with safety instructions. This involves extraction of data from various sources for emergency alerts, weather alerts, and contacts data. The extracted data is integrated using a semantic data model and transformed into semantic data. Semantic reasoning is done through rules and queries. This system is built using front-end web development technologies and at the back-end using semantic web technologies such as RDF, OWL, SPARQL, Apache Jena, TDB, and Apache Fuseki server. We present the details of the overall approach, process of data collection and transformation and the system built. This extended version includes a detailed discussion of the semantic reasoning module, research challenges in building this software system, related work in this area, and future research directions including the incorporation of geospatial components and standards.

BibTex:

    @Article{OJSW_2017v4i1n03_Pandey,
        title     = {A Semantic Safety Check System for Emergency Management},
        author    = {Yogesh Pandey and
                     Srividya K. Bansal},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {35--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266890},
        urn       = {urn:nbn:de:101:1-201711266890},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There has been an exponential growth and availability of both structured and unstructured data that can be leveraged to provide better emergency management in case of natural disasters and humanitarian crises. This paper is an extension of a semantics-based web application for safety check, which makes use of semantic web technologies to extract different kinds of relevant data about a natural disaster and alerts its users. The goal of this work is to design and develop a knowledge-intensive application that identifies those people that may have been affected due to natural disasters or man-made disasters at any geographical location and notifies them with safety instructions. This involves extraction of data from various sources for emergency alerts, weather alerts, and contacts data. The extracted data is integrated using a semantic data model and transformed into semantic data. Semantic reasoning is done through rules and queries. This system is built using front-end web development technologies and at the back-end using semantic web technologies such as RDF, OWL, SPARQL, Apache Jena, TDB, and Apache Fuseki server. We present the details of the overall approach, process of data collection and transformation and the system built. This extended version includes a detailed discussion of the semantic reasoning module, research challenges in building this software system, related work in this area, and future research directions including the incorporation of geospatial components and standards.}
    }
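
The back-end described in the abstract is reached over the standard SPARQL protocol. A hedged sketch (the dataset URL and predicate are hypothetical; assumes the requests package and a running Apache Fuseki server):

    import requests

    FUSEKI_QUERY_URL = "http://localhost:3030/safety/query"  # hypothetical
    query = """
    SELECT ?person ?location WHERE {
      ?person <http://example.org/lastSeenAt> ?location .
    }"""

    resp = requests.get(FUSEKI_QUERY_URL,
                        params={"query": query},
                        headers={"Accept": "application/sparql-results+json"})
    resp.raise_for_status()
    for b in resp.json()["results"]["bindings"]:
        print(b["person"]["value"], b["location"]["value"])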

 Open Access 

Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets

Rafael Peixoto, Thomas Hassan, Christophe Cruz, Aurélie Bertaux, Nuno Silva

Open Journal of Semantic Web (OJSW), 3(1), Pages 1-15, 2016, Downloads: 6855, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194907 | GNL-LP: 113236129X | Meta-Data: tex xml rdf rss

Abstract: Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items. The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps construct automatically a label hierarchy from statistical analysis of data. This paper focuses on the last two steps which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state of the art approaches in some areas.

BibTex:

    @Article{OJSW_2016v3i1n01_Peixoto,
        title     = {Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets},
        author    = {Rafael Peixoto and
                     Thomas Hassan and
                     Christophe Cruz and
                     Aur\'{e}lie Bertaux and
                     Nuno Silva},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194907},
        urn       = {urn:nbn:de:101:1-201705194907},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items. The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps construct automatically a label hierarchy from statistical analysis of data. This paper focuses on the last two steps which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state of the art approaches in some areas.}
    }

 Open Access 

A Semantic Question Answering Framework for Large Data Sets

Marta Tatu, Mithun Balakrishna, Steven Werner, Tatiana Erekhinskaya, Dan Moldovan

Open Journal of Semantic Web (OJSW), 3(1), Pages 16-31, 2016, Downloads: 12216, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194921 | GNL-LP: 1132361338 | Meta-Data: tex xml rdf rss

Abstract: Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.

BibTex:

    @Article{OJSW_2016v3i1n02_Tatu,
        title     = {A Semantic Question Answering Framework for Large Data Sets},
        author    = {Marta Tatu and
                     Mithun Balakrishna and
                     Steven Werner and
                     Tatiana Erekhinskaya and
                     Dan Moldovan},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {16--31},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194921},
        urn       = {urn:nbn:de:101:1-201705194921},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.}
    }
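
The core pipeline idea, question in, SPARQL out, precise answer back, can be caricatured in a few lines (a toy pattern matcher of our own, assuming rdflib; the paper's translation is of course far richer):

    import re
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:Luebeck ex:locatedIn ex:Germany .
    """, format="turtle")

    def question_to_sparql(question):
        # Toy grammar: only handles "Where is X?" questions.
        m = re.match(r"Where is (\w+)\?", question)
        if not m:
            raise ValueError("unsupported question pattern")
        return (f"SELECT ?place WHERE {{ <http://example.org/{m.group(1)}> "
                f"<http://example.org/locatedIn> ?place }}")

    for row in g.query(question_to_sparql("Where is Luebeck?")):
        print(row.place)  # -> http://example.org/Germany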

 Open Access 

OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources

Marek Smid, Petr Kremen

Open Journal of Semantic Web (OJSW), 3(1), Pages 32-50, 2016, Downloads: 5488, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194936 | GNL-LP: 1132361346 | Meta-Data: tex xml rdf rss

Abstract: Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.

BibTex:

    @Article{OJSW_2016v3i1n03_Smid,
        title     = {OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources},
        author    = {Marek Smid and
                     Petr Kremen},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {32--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194936},
        urn       = {urn:nbn:de:101:1-201705194936},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.}
    }

 Open Access 

Semantic and Web: The Semantic Part

Sven Groppe, Paulo Rupino da Cunha

Open Journal of Semantic Web (OJSW), 2(1), Pages 1-3, 2015, Downloads: 9860, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194864 | GNL-LP: 1132361222 | Meta-Data: tex xml rdf rss

Abstract: The Web is everywhere in daily life. Business is not possible any more without the fast communication through the web. The knowledge of the humans is reflected in the information accessible in the web. New challenges occur with the flood of information and electronic possibilities for the human being. The current World Wide Web enables an easy, instant access to a vast amount of online information. However, the content in the Web is typically for human consumption, and is not tailored to be machine-processed. The Semantic Web, which is intended to establish a machine-understandable web, thereby offers a promising and potential solution to mining and analyzing web content. The Semantic Web is currently changing from an emergent trend to a technology used in complex real-world applications. This part of the special issue "Semantic and Web" especially investigates how semantic technologies can help the human being to open the new possibilities of the web. The papers, which contribute more to Web technologies, are published in Open Journal of Web Technologies (OJWT).

BibTex:

    @Article{OJSW_2015v2i1n01e_Groppe,
        title     = {Semantic and Web: The Semantic Part},
        author    = {Sven Groppe and
                     Paulo Rupino da Cunha},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {1--3},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194864},
        urn       = {urn:nbn:de:101:1-201705194864},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Web is everywhere in daily life. Business is no longer possible without fast communication through the Web. Human knowledge is reflected in the information accessible in the Web. The flood of information and electronic possibilities creates new challenges for people. The current World Wide Web enables easy, instant access to a vast amount of online information. However, the content in the Web is typically intended for human consumption and is not tailored to be machine-processed. The Semantic Web, which is intended to establish a machine-understandable Web, thereby offers a promising solution for mining and analyzing web content. The Semantic Web is currently changing from an emergent trend to a technology used in complex real-world applications. This part of the special issue "Semantic and Web" investigates in particular how semantic technologies can help people unlock the new possibilities of the Web. The papers that contribute more to Web technologies are published in the Open Journal of Web Technologies (OJWT).}
    }

 Open Access 

BEAUFORD: A Benchmark for Evaluation of Formalisation of Definitions in OWL

Cheikh Kacfah Emani, Catarina Ferreira Da Silva, Bruno Fiés, Parisa Ghodous

Open Journal of Semantic Web (OJSW), 2(1), Pages 4-15, 2015, Downloads: 5236, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194879 | GNL-LP: 1132361257 | Meta-Data: tex xml rdf rss

Abstract: In this paper we present BEAUFORD, a benchmark for methods that aim to provide formal expressions of concepts from the natural language (NL) definitions of these concepts. Adding formal expressions of concepts to a given ontology allows reasoners to infer more useful pieces of information or to detect inconsistencies in this ontology. To the best of our knowledge, BEAUFORD is the first benchmark to tackle this ontology enrichment problem. BEAUFORD allows a given formalisation approach to be broken down by identifying its key features. In addition, BEAUFORD provides strong mechanisms to efficiently evaluate an approach even in the case of ambiguity, which is a major challenge in the formalisation of NL resources. Indeed, BEAUFORD takes into account the fact that a given NL phrase can be formalised in many ways. Hence, it proposes a suitable specification to represent these multiple formalisations. Taking advantage of this specification, BEAUFORD redefines classical precision and recall and introduces other metrics to take into account the fact that there is not only one unique way to formalise a definition. Finally, BEAUFORD comprises a well-suited dataset to concretely judge the efficiency of formalisation methods. Using BEAUFORD, current approaches to the formalisation of definitions can be compared accurately using a suitable gold standard.
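
For reference, the classical measures that BEAUFORD generalizes are the standard ones below; the paper's redefined variants additionally account for one definition having several admissible formalisations. With TP, FP and FN denoting true positives, false positives and false negatives:

    \mathrm{precision} = \frac{TP}{TP + FP}, \qquad \mathrm{recall} = \frac{TP}{TP + FN}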

BibTex:

    @Article{OJSW_2015v2i1n02_Kachfah,
        title     = {BEAUFORD: A Benchmark for Evaluation of Formalisation of Definitions in OWL},
        author    = {Cheikh Kacfah Emani and
                     Catarina Ferreira Da Silva and
                     Bruno Fi\'{e}s and
                     Parisa Ghodous},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194879},
        urn       = {urn:nbn:de:101:1-201705194879},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this paper we present BEAUFORD, a benchmark for methods that aim to provide formal expressions of concepts from the natural language (NL) definitions of these concepts. Adding formal expressions of concepts to a given ontology allows reasoners to infer more useful pieces of information or to detect inconsistencies in this ontology. To the best of our knowledge, BEAUFORD is the first benchmark to tackle this ontology enrichment problem. BEAUFORD allows a given formalisation approach to be broken down by identifying its key features. In addition, BEAUFORD provides strong mechanisms to efficiently evaluate an approach even in the case of ambiguity, which is a major challenge in the formalisation of NL resources. Indeed, BEAUFORD takes into account the fact that a given NL phrase can be formalised in many ways. Hence, it proposes a suitable specification to represent these multiple formalisations. Taking advantage of this specification, BEAUFORD redefines classical precision and recall and introduces other metrics to take into account the fact that there is not only one unique way to formalise a definition. Finally, BEAUFORD comprises a well-suited dataset to concretely judge the efficiency of formalisation methods. Using BEAUFORD, current approaches to the formalisation of definitions can be compared accurately using a suitable gold standard.}
    }

 Open Access 

Ontology Evolution Using Ontology Templates

Miroslav Blasko, Petr Kremen, Zdenek Kouba

Open Journal of Semantic Web (OJSW), 2(1), Pages 16-29, 2015, Downloads: 5329, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194898 | GNL-LP: 1132361281 | Meta-Data: tex xml rdf rss

Abstract: Evolving ontologies by domain experts is difficult and typically cannot be performed without the assistance of an ontology engineer. This process takes a long time, and recurrent modeling errors often have to be resolved. This paper proposes a technique for creating controlled ontology evolution scenarios that ensure the consistency of the possible ontology evolution and give guarantees to the domain expert that his/her updates do not cause inconsistency. We introduce ontology templates that formalize the notion of controlled evolution and define an ontology template consistency checking service together with a consistency checking algorithm. We prove correctness and demonstrate the practical use of the techniques in two scenarios.

BibTex:

    @Article{OJSW_2015v2i1n03_Blasko,
        title     = {Ontology Evolution Using Ontology Templates},
        author    = {Miroslav Blasko and
                     Petr Kremen and
                     Zdenek Kouba},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {16--29},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194898},
        urn       = {urn:nbn:de:101:1-201705194898},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Evolving ontologies by domain experts is difficult and typically cannot be performed without the assistance of an ontology engineer. This process takes a long time, and recurrent modeling errors often have to be resolved. This paper proposes a technique for creating controlled ontology evolution scenarios that ensure the consistency of the possible ontology evolution and give guarantees to the domain expert that his/her updates do not cause inconsistency. We introduce ontology templates that formalize the notion of controlled evolution and define an ontology template consistency checking service together with a consistency checking algorithm. We prove correctness and demonstrate the practical use of the techniques in two scenarios.}
    }

 Open Access 

Distributed Join Approaches for W3C-Conform SPARQL Endpoints

Sven Groppe, Dennis Heinrich, Stefan Werner

Open Journal of Semantic Web (OJSW), 2(1), Pages 30-52, 2015, Downloads: 10461, Citations: 6

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194910 | GNL-LP: 1132361303 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: Currently many SPARQL endpoints are freely available and accessible without any costs to users: everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as the intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (local ones and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi-Join and the Bitvector-Join. Finally, we compare all existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.
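
The Semi-Join strategy referenced in the abstract can be sketched independently of any particular engine: fetch the bindings of the join variable from one endpoint, then ship them to the other endpoint inside a SPARQL 1.1 VALUES clause so that only rows which actually join are returned. The following Python sketch is a minimal illustration under these assumptions, not the paper's implementation; it uses only the standard library and the W3C SPARQL protocol, and the endpoint URLs and FOAF queries are placeholders.

    import json
    import urllib.parse
    import urllib.request

    def run_select(endpoint, query):
        # SPARQL 1.1 protocol: send the query via GET, ask for JSON results.
        url = endpoint + "?" + urllib.parse.urlencode({"query": query})
        request = urllib.request.Request(
            url, headers={"Accept": "application/sparql-results+json"})
        with urllib.request.urlopen(request) as response:
            return json.load(response)["results"]["bindings"]

    ENDPOINT_A = "http://example.org/sparql-a"  # hypothetical endpoint
    ENDPOINT_B = "http://example.org/sparql-b"  # hypothetical endpoint
    FOAF = "http://xmlns.com/foaf/0.1/"

    # Step 1: fetch the bindings of the join variable ?person from endpoint A.
    rows = run_select(ENDPOINT_A,
                      "SELECT DISTINCT ?person WHERE { ?person a <%sPerson> }"
                      % FOAF)
    values = " ".join("<%s>" % row["person"]["value"] for row in rows)

    # Step 2: ship these bindings to endpoint B inside a VALUES clause, so
    # that B only returns rows which actually join (the semi-join effect).
    query_b = ("SELECT ?person ?name WHERE { VALUES ?person { %s } "
               "?person <%sname> ?name }" % (values, FOAF))
    for row in run_select(ENDPOINT_B, query_b):
        print(row["person"]["value"], row["name"]["value"])

A Bitvector-Join follows the same pattern but transmits a compact bit vector computed over the bindings instead of the explicit VALUES list, trading transfer volume against possible false positives.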

BibTex:

    @Article{OJSW_2015v2i1n04_Groppe,
        title     = {Distributed Join Approaches for W3C-Conform SPARQL Endpoints},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Stefan Werner},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {30--52},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194910},
        urn       = {urn:nbn:de:101:1-201705194910},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Currently many SPARQL endpoints are freely available and accessible without any costs to users: everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as the intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (local ones and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi-Join and the Bitvector-Join. Finally, we compare all existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.}
    }

 Open Access 

Developing Knowledge Models of Social Media: A Case Study on LinkedIn

Jinwu Li, Vincent Wade, Melike Sah

Open Journal of Semantic Web (OJSW), 1(2), Pages 1-24, 2014, Downloads: 13200

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194841 | GNL-LP: 1132361206 | Meta-Data: tex xml rdf rss

Abstract: User Generated Content (UGC) exchanged via large social networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and Semantic Web technologies. In addition, the paper discusses the crawling and data normalisation strategies that we developed, so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine of LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) we evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies. In particular, accuracy is evaluated by comparing manually entered metadata fields with the metadata that was automatically extracted. (3) User-perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how the extracted metadata suits a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.

BibTex:

    @Article{OJSW-v1i2n01_Li,
        title     = {Developing Knowledge Models of Social Media: A Case Study on LinkedIn},
        author    = {Jinwu Li and
                     Vincent Wade and
                     Melike Sah},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194841},
        urn       = {urn:nbn:de:101:1-201705194841},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {User Generated Content (UGC) exchanged via large social networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and Semantic Web technologies. In addition, the paper discusses the crawling and data normalisation strategies that we developed, so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine of LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) we evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies. In particular, accuracy is evaluated by comparing manually entered metadata fields with the metadata that was automatically extracted. (3) User-perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how the extracted metadata suits a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.}
    }

 Open Access 

P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud

Sven Groppe, Thomas Kiencke, Stefan Werner, Dennis Heinrich, Marc Stelzner, Le Gruenwald

Open Journal of Semantic Web (OJSW), 1(2), Pages 25-55, 2014, Downloads: 12892, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194858 | GNL-LP: 1132361214 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: Increasingly, data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on needs. In this work we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task as traditionally done, we precompute the Bloom filters as much as possible and store them in the indices besides the data. The experimental results with datasets of up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.
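
To make the role of the Bloom filters concrete, the following minimal Python sketch shows the data structure itself: a filter is (pre)computed over the join keys of one partition and later used to discard non-joining intermediate results early. It is a generic illustration with arbitrarily chosen sizes and hash counts, not P-LUPOSDATE's actual index layout.

    import hashlib

    class BloomFilter:
        def __init__(self, m_bits=1 << 16, k_hashes=4):
            self.m, self.k = m_bits, k_hashes
            self.bits = 0  # a plain integer used as a bit set

        def _positions(self, item):
            # Derive k bit positions from salted SHA-256 digests.
            for i in range(self.k):
                digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
                yield int.from_bytes(digest[:8], "big") % self.m

        def add(self, item):
            for pos in self._positions(item):
                self.bits |= 1 << pos

        def might_contain(self, item):
            # May report false positives, but never false negatives.
            return all((self.bits >> pos) & 1 for pos in self._positions(item))

    # "Precompute" a filter over the join keys of one partition ...
    bf = BloomFilter()
    for key in ("s1", "s2", "s3"):
        bf.add(key)

    # ... and use it elsewhere to drop non-joining intermediate results
    # before they are transferred or joined.
    candidates = ["s2", "s4", "s5"]
    print([k for k in candidates if bf.might_contain(k)])  # likely ['s2']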

BibTex:

    @Article{OJSW-v1i2n02_Groppe,
        title     = {P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud},
        author    = {Sven Groppe and
                     Thomas Kiencke and
                     Stefan Werner and
                     Dennis Heinrich and
                     Marc Stelzner and
                     Le Gruenwald},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {25--55},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194858},
        urn       = {urn:nbn:de:101:1-201705194858},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Increasingly, data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on needs. In this work we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task as traditionally done, we precompute the Bloom filters as much as possible and store them in the indices besides the data. The experimental results with datasets of up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.}
    }

 Open Access 

MapReduce-based Solutions for Scalable SPARQL Querying

José M. Giménez-Garcia, Javier D. Fernández, Miguel A. Martínez-Prieto

Open Journal of Semantic Web (OJSW), 1(1), Pages 1-18, 2014, Downloads: 10378, Citations: 10

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194824 | GNL-LP: 1132361168 | Meta-Data: tex xml rdf rss

Abstract: The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of large quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.
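
As a toy illustration of the pattern underlying the surveyed systems, the following single-process Python sketch joins two triple patterns on a shared variable the MapReduce way: the map phase keys each matching triple by the join variable, the shuffle groups the map output by key, and the reduce phase combines each group into bindings. The data and patterns are invented, and a real system would of course distribute the phases over a cluster.

    from collections import defaultdict

    triples = [
        ("alice", "knows", "bob"),
        ("alice", "name", "Alice"),
        ("bob", "name", "Bob"),
    ]

    def map_phase(triple):
        # Emit (join key, tagged value) pairs for the two triple patterns
        # ?p <knows> ?q and ?p <name> ?n, keyed on the shared variable ?p.
        s, p, o = triple
        if p == "knows":
            yield s, ("knows", o)
        elif p == "name":
            yield s, ("name", o)

    # Shuffle: group the map output by key, as the framework would.
    groups = defaultdict(list)
    for triple in triples:
        for key, value in map_phase(triple):
            groups[key].append(value)

    def reduce_phase(key, values):
        # Combine the two pattern groups sharing the same ?p into bindings.
        partners = [o for tag, o in values if tag == "knows"]
        names = [o for tag, o in values if tag == "name"]
        for q in partners:
            for n in names:
                yield {"?p": key, "?q": q, "?n": n}

    for key, values in groups.items():
        for binding in reduce_phase(key, values):
            print(binding)  # {'?p': 'alice', '?q': 'bob', '?n': 'Alice'}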

BibTex:

    @Article{OJSW-v1i1n02_Garcia,
        title     = {MapReduce-based Solutions for Scalable SPARQL Querying},
        author    = {Jos\'{e} M. Gim\'{e}nez-Garcia and
                     Javier D. Fern\'{a}ndez and
                     Miguel A. Mart\'{i}nez-Prieto},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194824},
        urn       = {urn:nbn:de:101:1-201705194824},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of large quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.}
    }

 Open Access 

BioSStore: A Client Interface for a Repository of Semantically Annotated Bioinformatics Web Services

Ismael Navas-Delgado, José F. Aldana-Montes

Open Journal of Semantic Web (OJSW), 1(1), Pages 19-29, 2014, Downloads: 9881, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194836 | GNL-LP: 1132361176 | Meta-Data: tex xml rdf rss

Abstract: Bioinformatics has shown itself to be a domain in which Web services are used extensively. In this domain, simple but real services are being developed. Thus, huge repositories of real services are available (for example, the main BioMOBY repository includes more than 1500 services). Besides, bioinformatics repositories usually have active communities using and working on improvements. However, these kinds of repositories do not exploit the full potential of Web services (and SOA, Service-Oriented Architectures, in general). On the other hand, sophisticated technologies have been proposed to improve SOA, including the annotation of Web services to explicitly describe them. However, these approaches lack repositories with real services. In the work presented here, we address the drawbacks present in bioinformatics services and try to improve the current semantic model by introducing the use of the W3C standard Semantic Annotations for WSDL and XML Schema (SAWSDL) and related proposals (WSMO Lite). This paper focuses on a user interface that takes advantage of a repository of semantically annotated bioinformatics Web services. In this way, we exploit semantics for the discovery of Web services, showing how the use of semantics will improve user searches. The BioSStore is available at http://biosstore.khaos.uma.es. This portal will also contain future developments of this proposal.

BibTex:

    @Article{OJSW-v1i1n03_Delgado,
        title     = {BioSStore: A Client Interface for a Repository of Semantically Annotated Bioinformatics Web Services},
        author    = {Ismael Navas-Delgado and
                     Jos\'{e} F. Aldana-Montes},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {19--29},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194836},
        urn       = {urn:nbn:de:101:1-201705194836},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Bioinformatics has shown itself to be a domain in which Web services are used extensively. In this domain, simple but real services are being developed. Thus, huge repositories of real services are available (for example, the main BioMOBY repository includes more than 1500 services). Besides, bioinformatics repositories usually have active communities using and working on improvements. However, these kinds of repositories do not exploit the full potential of Web services (and SOA, Service-Oriented Architectures, in general). On the other hand, sophisticated technologies have been proposed to improve SOA, including the annotation of Web services to explicitly describe them. However, these approaches lack repositories with real services. In the work presented here, we address the drawbacks present in bioinformatics services and try to improve the current semantic model by introducing the use of the W3C standard Semantic Annotations for WSDL and XML Schema (SAWSDL) and related proposals (WSMO Lite). This paper focuses on a user interface that takes advantage of a repository of semantically annotated bioinformatics Web services. In this way, we exploit semantics for the discovery of Web services, showing how the use of semantics will improve user searches. The BioSStore is available at http://biosstore.khaos.uma.es. This portal will also contain future developments of this proposal.}
    }

OJSW Publication Fees

All articles published by RonPub are fully open access and available online to readers free of charge. To be able to provide open access journals, RonPub defrays the costs (induced by the processing and editing of manuscripts, the provision and maintenance of infrastructure, and the routine operation and management of journals) by charging a one-time publication fee for each accepted article. In order to ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors from low-income countries. Authors who do not have funds to cover publication fees should submit an application during the submission process. Waiver applications are examined on a case-by-case basis. The scientific committee members of RonPub are entitled to a partial waiver of the standard publication fees as a reward for their work.

  • Standard publication fee: 338 Euro (excluding tax).
  • Authors from low-income countries: 71% waiver of the standard publication fee. (Note: the list below is subject to change based on data from the World Bank Group.):
    Afghanistan, Bangladesh, Benin, Bhutan, Bolivia (Plurinational State of), Burkina Faso, Burundi, Cambodia, Cameroon, Central African Republic, Chad, Comoros, Congo (Democratic Republic), Côte d'Ivoire, Djibouti, Eritrea, Ethiopia, Gambia, Ghana, Guinea, Guinea-Bissau, Haiti, Honduras, Kenya, Kiribati, Korea (Democratic People’s Republic), Kosovo, Kyrgyz Republic, Lao (People’s Democratic Republic), Lesotho, Liberia, Madagascar, Malawi, Mali, Mauritania, Micronesia (Federated States of), Moldova, Morocco, Mozambique, Myanmar, Nepal, Nicaragua, Niger, Nigeria, Papua New Guinea, Rwanda, Senegal, Sierra Leone, Solomon Islands, Somalia, South Sudan, Sudan, Swaziland, Syrian Arab Republic, São Tomé and Principe, Tajikistan, Tanzania, Timor-Leste, Togo, Uganda, Uzbekistan, Vietnam, West Bank and Gaza Strip, Yemen (Republic), Zambia, Zimbabwe
  • Scientific committee members: 25% waiver of the standard publication fee.
  • Guest editors and reviewers: 25% waiver of the standard publication fee for one year.

Payments are subject to tax. A German VAT (value-added tax) of 19% will be charged if applicable. US and Canadian customers need to provide their sales tax number and their certificate of incorporation to be exempt from the VAT charge; European Union customers (except German customers) need to provide their VAT number to be exempt from the VAT charge. Customers from Germany and from all other countries will be charged VAT. Individuals are not eligible for tax-exempt status.

Editors and reviewers have no access to payment information. The inability to pay will not influence the decision to publish a paper; decisions to publish are only based on the quality of work and the editorial criteria.

OJSW Indexing

In order for our publications to be widely abstracted, indexed and cited, the following methods are employed:

  • Various meta tags are embedded in each publication webpage, including Google Scholar Tags, Dublin Core, EPrints, BE Press and PRISM. This enables crawlers, e.g. of Google Scholar, to discover and index our publications.
  • Different metadata export formats are provided for each article, including BibTex, XML, RSS and RDF. This makes it easy for readers to cite our papers.
  • An OAI-PMH interface is implemented, which facilitates the harvesting of our article metadata by indexing services and databases; a minimal harvesting sketch follows this list.
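
For illustration, an indexing service could harvest article metadata over OAI-PMH roughly as in the following Python sketch. The ListRecords verb and the oai_dc metadata prefix are defined by the OAI-PMH standard, while the endpoint URL below is a placeholder rather than the actual address of our interface.

    import urllib.parse
    import urllib.request
    import xml.etree.ElementTree as ET

    BASE_URL = "http://example.org/oai"  # hypothetical OAI-PMH endpoint
    OAI = "{http://www.openarchives.org/OAI/2.0/}"
    DC = "{http://purl.org/dc/elements/1.1/}"

    # ListRecords with the mandatory oai_dc metadata format.
    params = urllib.parse.urlencode({"verb": "ListRecords",
                                     "metadataPrefix": "oai_dc"})
    with urllib.request.urlopen(BASE_URL + "?" + params) as response:
        tree = ET.parse(response)

    # Print the Dublin Core title of every harvested record.
    for record in tree.iter(OAI + "record"):
        title = record.find(".//" + DC + "title")
        if title is not None:
            print(title.text)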

The paper Getting Indexed by Bibliographic Databases in the Area of Computer Science provides a comprehensive survey of indexing formats, techniques and databases. We will also continue our efforts on the dissemination and indexing of our publications.

OJSW has been indexed by the following libraries and bibliographic databases:

Submission to Open Journal of Semantic Web (OJSW)

Please submit your manuscript by carefully filling in the information in the following web form. If there are technical problems, you may also submit your manuscript by sending the information and the manuscript to .

Submission to Regular or Special Issue

Please specify if the paper is submitted to a regular issue or one of the special issues:

Type of Paper

Please specify the type of your paper here. Please check Aims & Scope if you are not sure which type your paper is.

Traditional or Open & Transparent Review

Besides traditional reviews, OJSW offers the possibility of an open & transparent review process. Please specify your type of review here. Please check Author Guidelines for further information about the types of reviews.

If you wish that the reviewers are not aware of your name, please submit a blinded manuscript leaving out identifiable information like authors' names and affiliations.

Title

Please specify the title of your paper here:

Abstract

Please copy & paste the abstract of your paper here:

Authors

Please provide the necessary information about the authors of your submission here. Please mark the contact authors, who will be contacted for the main correspondence.

Author 1:

Name:
EMail:
Affiliation:
Webpage (optional):

Author 2:

Name:
EMail:
Affiliation:
Webpage (optional):

Author 3:

Name:
EMail:
Affiliation:
Webpage (optional):

Conflicts of Interest

Please specify any conflicts of interest here. Conflicts of interest occur, e.g., if the author and the editor are colleagues, work or have worked closely together, or are relatives.

Suggestion of Editors (Optional)

You can suggest editors (with a scientific background in the topics addressed in your submission) for handling your submission. The Editor-in-Chief may consider your suggestion, but may also choose another editor.

Suggestion of Reviewers (Optional)

You can suggest reviewers (with a scientific background in the topics addressed in your submission) for handling your submission. The editor of your submission may consider your suggestion, but may also choose other or additional reviewers in order to guarantee an independent review process.

Reviewer 1:

Name:
EMail:
Affiliation:
Webpage (optional):

Reviewer 2:

Name:
EMail:
Affiliation:
Webpage (optional):

Reviewer 3:

Name:
EMail:
Affiliation:
Webpage (optional):

Paper Upload

Please choose your manuscript file for uploading. It should be a PDF file. Please take care that your manuscript is formatted according to the templates provided by RonPub, which are available on our Author Guidelines page. Manuscripts not formatted according to the RonPub templates will be rejected without review!

If you wish that the reviewers are not aware of your name, please submit a blinded manuscript leaving out identifiable information like the authors' names and affiliations.

Submission

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available for download in Word and LaTeX versions on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and other information necessary for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who want to do so should provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an email confirming receipt within a few days. Subsequent enquiries concerning paper progress should be made to the corresponding editorial office (see the individual journal webpage for concrete contact information).

Review Procedure

RonPub is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports will be collected. The editor and the editor-in-chief evaluate the manuscript itself and the review reports and make an accept/revision/reject decision. Authors will be informed of the decision and the reviewing results on average within 6-8 weeks after the manuscript submission. In the case of a revision, authors are required to perform an adequate revision to address the concerns from the evaluation reports. A new round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction free of charge in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. The publications archived in the German National Library are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original systems, which are long obsolete. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive their articles published on the website of RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, 60 kilometers away from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in the management of the journal. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision on whether a paper can be published after peer review and revisions.

As a reward for the work of an Editor-in-Chief, the Editor-in-Chief will obtain a 25% discount on the standard publication fee for her/his papers (where the Editor-in-Chief is one of the authors) published in any of the RonPub journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in maintaining the scientific quality of the journal and in deciding about its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript him-/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation about the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the Editorial Office so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or in some special situations, the editorial office may forward the requests to the Publisher or the Editor-in-Chief, who will take the decision directly.

As a reward for the work of editors, an editor will obtain a 25% discount on the standard publication fee for her/his papers (where the editor is one of the authors) published in any of the RonPub journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors are in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by (guest) editors will be sent to the Editor-in-Chief of the journal, who will check the quality of the papers and make the final decision on whether a paper can be published.

Our editorial office has the right to directly ask authors to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors are required to revise their paper several times if necessary. A paper accepted by its guest editor may be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors do not really make the effort to revise their paper. A high-quality publication needs the joint efforts of the journal, reviewers, editors, the editor-in-chief and the authors.

The Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue will obtain a 25% discount on the standard publication fee for any of their papers published in any of the RonPub journals for one year.

Reviewers’ Responsibility

A reviewer is mainly responsible for reviewing manuscripts, writing review reports and recommending the acceptance or rejection of manuscripts. Reviewers are also encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Based on the quality of their reviewing work, a reviewer has the potential to be promoted to a full editorial board member.

As a reward for the reviewing work, a reviewer will obtain a 25% discount on the standard publication fee for her/his papers (where the reviewer is one of the authors) published in any of the RonPub journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaboration to our contact email address.

Publication Criteria

This part provides important information for both the scientific committees and authors.

Ethics Requirement:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office will check the authenticity of the authors and their affiliations before a peer review begins. It is important that the authors communicate with us using the email addresses of their affiliations and provide us the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journal against existing publications. The overall quality of the paper will also be checked, including format, figures, tables, integrity and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for the acceptance of manuscripts is the quality of the work. This is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, these points should reach at least a middle level.

Guidelines for Rejection:

  • If the work described in the manuscript has been published, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains data falsification or fabrication, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access and peer-reviewed scientific journals. For more information about RonPub, please visit our website.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals along with your curriculum vitae to our contact email address.

We are also open to publishing collaborations with research societies. Please send your publishing collaboration proposals to our contact email address as well.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or a reviewer is a matter of prestige and personal achievement. Based on the quality of their reviewing work, a reviewer has the potential to be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any RonPub journal, please send an email with your curriculum vitae to our contact email address. We will reply as soon as possible. For more information about editors and reviewers, please see our website.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please e-mail our contact address.

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.