RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.
  • Authors retain the copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website.

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast, high-quality, yet lower-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver to editors and reviewers of RonPub as a reward for their work. See the respective journal webpage for the specific publication fee.

RonPub Publication Criteria:

We are most concerned with the quality, not the quantity, of publications; we publish only high-quality scholarly papers. The Publication Criteria page describes the requirements a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

To ensure the publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools such as Anti-Plagiarism, PaperRater and Viper to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. The archived publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, with wonderful seaside resorts and sandy beaches as well as good restaurants.

Open Journal of Big Data (OJBD)
OJBD is an open access and peer-reviewed online journal that publishes original and creative research results on Big Data. OJBD distributes its articles under the open access model. All articles of OJBD are fully open access and available online to readers free of charge. There is no restriction on the length of papers. Accepted manuscripts are published online immediately.
Publisher: RonPub UG (haftungsbeschränkt), Lübeck, Germany
Contact: OJBD Editorial Office
ISSN: 2365-029X
Call for Papers: txt (UTF-8) | txt (ASCII) | pdf

Aims & Scope

Big Data is expected to remain one of the hottest research topics over the coming years. We have solid plans and hold regular meetings to ensure that our journal continuously attracts the best papers from reputable researchers in support of our mission. Our objectives are as follows:

  • Disseminate the emerging techniques, technologies and services associated with Big Data.
  • Offer empirical evidence and approaches to demonstrate contributions made by Big Data.
  • Offer recommendations to research and enterprise communities that use Big Data as a solution for their work.
  • Offer guidelines and strategic directions in the way that Big Data research should progress.

We seek recommendations and practices that can be successfully transferred to other disciplines such as healthcare, finance, education and science, giving us quality papers centered on Big Data whose lessons learned carry across disciplines, thereby encouraging the inter-disciplinary research and funding activities essential for progressive research and development. We cover extensive studies to ensure that the research and enterprise communities can take up our recommendations, guidelines and best practices and achieve real, positive impacts on their services and projects, and that the key lessons drawn from our journal remain useful to these communities. By blending workshops with calls for papers, we ensure that our articles are of the highest caliber and demonstrate added value and benefits to those adopting our recommendations, whose citations and adoption of our key lessons in turn keep our quality high.

Our journal has the following advantages over competing journals in Big Data. First, the steps involved in Big Data development should be reproducible, so that organizations can follow them; some articles in competing journals are very theoretical, which makes reproduction difficult. Second, all demonstrated deliveries in our journal should be easy to use and provide real added value to technology-adopting organizations beyond mere technical implementations, unlike some articles in competing journals whose deliveries are hard to understand and do not consider technical or organizational adoption. We also encourage industrial partners to contribute their latest developments, success stories (empirical) and best practices (quantitative and qualitative), ensuring that our journal articles have the edge over others.

The Open Journal of Big Data (OJBD) welcomes high-quality scholarly papers, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, and industrial applications and adoption. The journal covers a wide range of topics, including Big Data science, frameworks, analytics, visualizations, recommendations and data-intensive research. OJBD presents the current challenges of Big Data adoption and implementation, and recommends ways, techniques, services and technologies that can resolve existing challenges and improve current practices. We focus on how Big Data can make a huge positive impact on disciplines beyond IT, including healthcare, finance, education, physical science, biological science, earth science, business & management, information systems, social sciences and law. There are eight major topics, as follows:

  • Techniques, algorithms and innovative methods for processing Big Data (or big datasets) that achieve high performance, accuracy and low cost.
  • Design, implementation, evaluation and services related to Big Data, including the development process, use cases, experiments and associated simulations.
  • Systems and applications developed with Big Data, and descriptions of how Big Data can be used in disciplines such as bioinformatics, finance, education, natural science, weather science, life science, physics, astronomy, law and social science.
  • Security, privacy, trust, data ownership, legal challenges, business models, information systems, social implications, social network analyses and social science related to Big Data.
  • Consolidation of existing technologies (databases, web, mobile, HPC) and their integration with Big Data, such as SOA Big Data, data mining, machine learning, HPC Big Data and cloud storage.
  • Recommendations, emerging technologies and techniques associated with Big Data, such as mobile Big Data, standards, multi-clouds and the Internet of Things.
  • Data analysis, analytics and visualization, including GPU techniques, new algorithms and methods showing how to achieve significant improvements from existing methods.
  • Surveys, case studies, frameworks and user evaluations involved with qualitative, quantitative and/or computational research methods.

Author Guidelines

Publication Criteria

The Publication Criteria page provides important information that helps authors prepare their manuscripts with a high likelihood of acceptance.

Manuscript Preparation

Please prepare your manuscripts using the manuscript template of the journal, available for download in Word (doc, docx) and LaTeX (zip) versions. The template describes the format and structure of manuscripts and other information necessary for preparing them. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts following the information on the submit page. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an email confirming receipt. Subsequent enquiries concerning the progress of a paper should be sent to the email address of the journal.

Review Procedure

OJBD is committed to a rigorous peer-review process. All manuscripts submitted for publication in OJBD are strictly and thoroughly peer-reviewed. When a manuscript is submitted, the editor-in-chief assigns it to an appropriate editor, who takes charge of the review process for that manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript and the review reports, and make an accept/revise/reject decision. Authors are informed of the decision and the review results within 6-8 weeks of submission on average. In the case of a revision, authors are required to revise the manuscript adequately to address the concerns raised in the review reports. A second round of peer review is performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. The archived publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems. Further measures will be taken if necessary. Furthermore, we encourage our authors to self-archive the versions of their articles published on the RonPub website.

Publication Ethics Statement

To ensure the publishing quality and the reputation of the journal, it is important that all parties involved in the act of publishing adhere to standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools such as Anti-Plagiarism, PaperRater and Viper to check the content of manuscripts submitted to our journals against existing publications.

Our journal follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Articles of OJBD

Archive

 Open Access 

Translation of Array-based Loop Programs to Optimized SQL-based Distributed Programs

Md Hasanuzzaman Noor, Leonidas Fegaras, Tanvir Ahmed Khan, Tanzima Sultana

Open Journal of Big Data (OJBD), 6(1), Pages 1-25, 2022, Downloads: 1843

Full-Text: pdf | URN: urn:nbn:de:101:1-2022041308230855567393 | GNL-LP: 1255285087 | Meta-Data: tex xml rdf rss

Abstract: Many data analysis programs are often expressed in terms of array operations in sequential loops. However, these programs do not scale very well to large amounts of data that cannot fit in the memory of a single computer and they have to be rewritten to work on Big Data analysis platforms, such as Map-Reduce and Spark. We present a novel framework, called SQLgen, that automatically translates sequential loops on arrays to distributed data-parallel programs, specifically Spark SQL programs. We further extend this framework by introducing OSQLgen, which automatically parallelizes array-based loop programs to distributed data-parallel programs on block arrays. At first, our framework translates the sequential loops on arrays to monoid comprehensions and then to Spark SQL. For SQLgen, the SQL is over coordinate arrays while for OSQLgen, it is over block arrays. As block arrays are more compact than coordinate arrays, computations on block matrices are significantly faster than on arrays in the coordinate format. Since not all array-based loops can be translated to SQL on block arrays, we focus on certain patterns of loops that match an algebraic structure known as a semiring. Many linear algebra operations, such as matrix multiplication required in many machine learning algorithms, as well as many graph programs that are equivalent to a semiring can be translated to distributed data-parallel programs on block arrays using OSQLgen, thus giving us a substantial performance gain. Finally, to evaluate our framework, we compare the performance of OSQLgen with GraphX, GraphFrames, MLlib, and hand-written Spark SQL programs on coordinate and block arrays on various real-world problems.

BibTex:

    @Article{OJBD_2022v6i1n01_Noor,
        title     = {Translation of Array-based Loop Programs to Optimized SQL-based Distributed Programs},
        author    = {Md Hasanuzzaman Noor and
                     Leonidas Fegaras and
                     Tanvir Ahmed Khan and
                     Tanzima Sultana},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2022},
        volume    = {6},
        number    = {1},
        pages     = {1--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022041308230855567393},
        urn       = {urn:nbn:de:101:1-2022041308230855567393},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many data analysis programs are often expressed in terms of array operations in sequential loops. However, these programs do not scale very well to large amounts of data that cannot fit in the memory of a single computer and they have to be rewritten to work on Big Data analysis platforms, such as Map-Reduce and Spark. We present a novel framework, called SQLgen, that automatically translates sequential loops on arrays to distributed data-parallel programs, specifically Spark SQL programs. We further extend this framework by introducing OSQLgen, which automatically parallelizes array-based loop programs to distributed data-parallel programs on block arrays. At first, our framework translates the sequential loops on arrays to monoid comprehensions and then to Spark SQL. For SQLgen, the SQL is over coordinate arrays while for OSQLgen, it is over block arrays. As block arrays are more compact than coordinate arrays, computations on block matrices are significantly faster than on arrays in the coordinate format. Since not all array-based loops can be translated to SQL on block arrays, we focus on certain patterns of loops that match an algebraic structure known as a semiring. Many linear algebra operations, such as matrix multiplication required in many machine learning algorithms, as well as many graph programs that are equivalent to a semiring can be translated to distributed data-parallel programs on block arrays using OSQLgen, thus giving us a substantial performance gain. Finally, to evaluate our framework, we compare the performance of OSQLgen with GraphX, GraphFrames, MLlib, and hand-written Spark SQL programs on coordinate and block arrays on various real-world problems.}
    }
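
To make the loop-to-SQL rewrite concrete, here is a minimal hand-worked sketch of the kind of translation the abstract describes (an illustration only, not the authors' SQLgen code; the table and column names are invented): a sequential matrix-vector loop is re-expressed as a Spark SQL join plus aggregation over a coordinate-array encoding.

    # Illustrative sketch: a sequential array loop and a hand-written
    # Spark SQL equivalent over a coordinate (COO) representation, i.e.
    # the kind of program SQLgen derives automatically.
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("sqlgen-sketch").getOrCreate()

    # The sequential loop a data scientist might write: y = M * v.
    M = [[1.0, 2.0], [3.0, 4.0]]
    v = [10.0, 20.0]
    y = [0.0, 0.0]
    for i in range(2):
        for j in range(2):
            y[i] += M[i][j] * v[j]

    # The same data as coordinate arrays (one tuple per entry).
    m_df = spark.createDataFrame(
        [(i, j, M[i][j]) for i in range(2) for j in range(2)],
        ["i", "j", "val"])
    v_df = spark.createDataFrame([(j, v[j]) for j in range(2)], ["j", "val"])
    m_df.createOrReplaceTempView("M")
    v_df.createOrReplaceTempView("V")

    # The loop, re-expressed as distributed SQL: join on the shared
    # index j, multiply the matched values, aggregate per row index i.
    res = spark.sql("""
        SELECT M.i AS i, SUM(M.val * V.val) AS val
        FROM M JOIN V ON M.j = V.j
        GROUP BY M.i
    """).collect()

    print(y)                                  # [50.0, 110.0]
    print(sorted((r.i, r.val) for r in res))  # [(0, 50.0), (1, 110.0)]

Roughly speaking, the block-array variant (OSQLgen) would replace the per-element tuples with per-block submatrices, while the join/aggregate shape of the query stays the same.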

 Open Access 

A SIEM Architecture for Advanced Anomaly Detection

Tim Laue, Timo Klecker, Carsten Kleiner, Kai-Oliver Detken

Open Journal of Big Data (OJBD), 6(1), Pages 26-42, 2022, Downloads: 21878

Full-Text: pdf | URN: urn:nbn:de:101:1-2022070319330522943055 | GNL-LP: 1261725549 | Meta-Data: tex xml rdf rss

Abstract: Dramatic increases in the number of cyber security attacks and breaches toward businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only cause the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations, financial damage to the attacked business or organization itself, but also may navigate to peer businesses/organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture (within the GLACIER project) that can be realized as an in-house operated Security Information Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require too much management effort. It requires short configuration and learning phases after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts which may be outsourced to a service provider in SMEs, if necessary). Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage. It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system that differentiates it from similar solutions in the market is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff who are allowed to give feedback for anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.

BibTex:

    @Article{OJBD_2022v6i1n02_Laue,
        title     = {A SIEM Architecture for Advanced Anomaly Detection},
        author    = {Tim Laue and
                     Timo Klecker and
                     Carsten Kleiner and
                     Kai-Oliver Detken},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2022},
        volume    = {6},
        number    = {1},
        pages     = {26--42},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2022070319330522943055},
        urn       = {urn:nbn:de:101:1-2022070319330522943055},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Dramatic increases in the number of cyber security attacks and breaches toward businesses and organizations have been experienced in recent years. The negative impacts of these breaches not only cause the stealing and compromising of sensitive information, malfunctioning of network devices, disruption of everyday operations, financial damage to the attacked business or organization itself, but also may navigate to peer businesses/organizations in the same industry. Therefore, prevention and early detection of these attacks play a significant role in the continuity of operations in IT-dependent organizations. At the same time detection of various types of attacks has become extremely difficult as attacks get more sophisticated, distributed and enabled by Artificial Intelligence (AI). Detection and handling of these attacks require sophisticated intrusion detection systems which run on powerful hardware and are administered by highly experienced security staff. Yet, these resources are costly to employ, especially for small and medium-sized enterprises (SMEs). To address these issues, we developed an architecture (within the GLACIER project) that can be realized as an in-house operated Security Information Event Management (SIEM) system for SMEs. It is affordable for SMEs as it is solely based on free and open-source components and thus does not require any licensing fees. Moreover, it is a Self-Contained System (SCS) and does not require too much management effort. It requires short configuration and learning phases after which it can be self-contained as long as the monitored infrastructure is stable (apart from a reaction to the generated alerts which may be outsourced to a service provider in SMEs, if necessary). Another main benefit of this system is to supply data to advanced detection algorithms, such as multidimensional analysis algorithms, in addition to traditional SIEM-specific tasks like data collection, normalization, enrichment, and storage. It supports the application of novel methods to detect security-related anomalies. The most distinct feature of this system that differentiates it from similar solutions in the market is its user feedback capability. Detected anomalies are displayed in a Graphical User Interface (GUI) to the security staff who are allowed to give feedback for anomalies. Subsequently, this feedback is utilized to fine-tune the anomaly detection algorithm. In addition, this GUI also provides access to network actors for quick incident responses. The system in general is suitable for both Information Technology (IT) and Operational Technology (OT) environments, while the detection algorithm must be specifically trained for each of these environments individually.}
    }
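
The user-feedback loop highlighted in the abstract can be sketched in a few lines. The following is a deliberately simplified stand-in, not the GLACIER implementation: a z-score detector over event counts whose alert threshold is tuned by the analyst's confirmed/false-positive feedback.

    # Simplified sketch of feedback-driven anomaly detection (hypothetical,
    # not the GLACIER code): a z-score over a learned baseline, with analyst
    # feedback from the GUI nudging the alert threshold up or down.
    from statistics import mean, stdev

    class FeedbackDetector:
        def __init__(self, baseline, threshold=3.0, step=0.2):
            self.mu = mean(baseline)
            self.sigma = stdev(baseline) or 1.0
            self.threshold = threshold  # alert when the z-score exceeds this
            self.step = step            # how strongly feedback retunes it

        def is_alert(self, value):
            return abs(value - self.mu) / self.sigma > self.threshold

        def feedback(self, confirmed):
            # Confirmed anomalies make the detector more sensitive;
            # false positives make it more conservative.
            self.threshold += -self.step if confirmed else self.step

    det = FeedbackDetector(baseline=[12, 15, 11, 14, 13, 12])  # logins/minute
    print(det.is_alert(40))        # True: far outside the learned baseline
    det.feedback(confirmed=False)  # analyst marks the alert a false positive
    print(det.threshold)           # 3.2: slightly harder to trigger now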

 Open Access 

Sparse and Dense Linear Algebra for Machine Learning on Parallel-RDBMS Using SQL

Dennis Marten, Holger Meyer, Daniel Dietrich, Andreas Heuer

Open Journal of Big Data (OJBD), 5(1), Pages 1-34, 2019, Downloads: 5421, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-2018122318341069172957 | GNL-LP: 1174122773 | Meta-Data: tex xml rdf rss

Abstract: While computational modelling gets more complex and more accurate, its calculation costs have been increasing alike. However, working on big data environments usually involves several steps of massive unfiltered data transmission. In this paper, we continue our work on the PArADISE framework, which enables privacy aware distributed computation of big data scenarios, and present a study on how linear algebra operations can be calculated over parallel relational database systems using SQL. We investigate the ways to improve the computation performance of algebra operations over relational databases and show how using database techniques impacts the computation performance like the use of indexes, choice of schema, query formulation and others. We study the dense and sparse problems of linear algebra over relational databases and show that especially sparse problems can be efficiently computed using SQL. Furthermore, we present a simple but universal technique to improve intra-operator parallelism for linear algebra operations in order to support the parallel computation of big data.

BibTex:

    @Article{OJBD_2019v5i1n01_Marten,
        title     = {Sparse and Dense Linear Algebra for Machine Learning on Parallel-RDBMS Using SQL},
        author    = {Dennis Marten and
                     Holger Meyer and
                     Daniel Dietrich and
                     Andreas Heuer},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2019},
        volume    = {5},
        number    = {1},
        pages     = {1--34},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2018122318341069172957},
        urn       = {urn:nbn:de:101:1-2018122318341069172957},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {While computational modelling gets more complex and more accurate, its calculation costs have been increasing alike. However, working on big data environments usually involves several steps of massive unfiltered data transmission. In this paper, we continue our work on the PArADISE framework, which enables privacy aware distributed computation of big data scenarios, and present a study on how linear algebra operations can be calculated over parallel relational database systems using SQL. We investigate the ways to improve the computation performance of algebra operations over relational databases and show how using database techniques impacts the computation performance like the use of indexes, choice of schema, query formulation and others. We study the dense and sparse problems of linear algebra over relational databases and show that especially sparse problems can be efficiently computed using SQL. Furthermore, we present a simple but universal technique to improve intra-operator parallelism for linear algebra operations in order to support the parallel computation of big data.}
    }
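
The core idea of computing linear algebra inside the database can be shown with a tiny example. The sketch below uses SQLite as a stand-in for a parallel RDBMS (the schema and names are ours, not the paper's): a sparse matrix stored as (row, column, value) tuples, with the matrix-vector product computed entirely in SQL.

    # Minimal sketch: sparse matrix-vector multiplication in SQL. Only the
    # non-zero entries are stored, which is why the sparse case maps so well
    # onto relational join + aggregation.
    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE A (i INTEGER, j INTEGER, val REAL)")
    con.execute("CREATE TABLE x (j INTEGER, val REAL)")
    # An index on the join column, one of the tuning knobs the paper studies.
    con.execute("CREATE INDEX a_j ON A (j)")

    # A 3x3 matrix with four non-zeros, and a dense vector.
    con.executemany("INSERT INTO A VALUES (?,?,?)",
                    [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0), (2, 0, 4.0)])
    con.executemany("INSERT INTO x VALUES (?,?)",
                    [(0, 1.0), (1, 2.0), (2, 3.0)])

    # y = A * x: join on the column index, aggregate per row index.
    print(con.execute("""
        SELECT A.i, SUM(A.val * x.val)
        FROM A JOIN x ON A.j = x.j
        GROUP BY A.i
        ORDER BY A.i
    """).fetchall())    # [(0, 5.0), (1, 6.0), (2, 4.0)]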

 Open Access 

Compile-Time Query Optimization for Big Data Analytics

Leonidas Fegaras

Open Journal of Big Data (OJBD), 5(1), Pages 35-61, 2019, Downloads: 5021, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-2019041419330955160405 | GNL-LP: 1183558627 | Meta-Data: tex xml rdf rss

Abstract: Many emerging programming environments for large-scale data analysis, such as Map-Reduce, Spark, and Flink, provide Scala-based APIs that consist of powerful higher-order operations that ease the development of complex data analysis applications. However, despite the simplicity of these APIs, many programmers prefer to use declarative languages, such as Hive and Spark SQL, to code their distributed applications. Unfortunately, most current data analysis query languages are based on the relational model and cannot effectively capture the rich data types and computations required for complex data analysis applications. Furthermore, these query languages are not well-integrated with the host programming language, as they are based on an incompatible data model. To address these shortcomings, we introduce a new query language for data-intensive scalable computing that is deeply embedded in Scala, called DIQL, and a query optimization framework that optimizes and translates DIQL queries to byte code at compile-time. In contrast to other query languages, our query embedding eliminates impedance mismatch as any Scala code can be seamlessly mixed with SQL-like syntax, without having to add any special declaration. DIQL supports nested collections and hierarchical data and allows query nesting at any place in a query. With DIQL, programmers can express complex data analysis tasks, such as PageRank and matrix factorization, using SQL-like syntax exclusively. The DIQL query optimizer uses algebraic transformations to derive all possible joins in a query, including those hidden across deeply nested queries, thus unnesting nested queries of any form and any number of nesting levels. The optimizer also uses general transformations to push down predicates before joins and to prune unneeded data across operations. DIQL has been implemented on three Big Data platforms, Apache Spark, Apache Flink, and Twitter's Cascading/Scalding, and has been shown to have competitive performance relative to Spark DataFrames and Spark SQL for some complex queries. This paper extends our previous work on embedded data-intensive query languages by describing the complete details of the formal framework and the query translation and optimization processes, and by providing more experimental results that give further evidence of the performance of our system.

BibTex:

    @Article{OJBD_2019v5i1n02_Fegaras,
        title     = {Compile-Time Query Optimization for Big Data Analytics},
        author    = {Leonidas Fegaras},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2019},
        volume    = {5},
        number    = {1},
        pages     = {35--61},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2019041419330955160405},
        urn       = {urn:nbn:de:101:1-2019041419330955160405},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many emerging programming environments for large-scale data analysis, such as Map-Reduce, Spark, and Flink, provide Scala-based APIs that consist of powerful higher-order operations that ease the development of complex data analysis applications. However, despite the simplicity of these APIs, many programmers prefer to use declarative languages, such as Hive and Spark SQL, to code their distributed applications. Unfortunately, most current data analysis query languages are based on the relational model and cannot effectively capture the rich data types and computations required for complex data analysis applications. Furthermore, these query languages are not well-integrated with the host programming language, as they are based on an incompatible data model. To address these shortcomings, we introduce a new query language for data-intensive scalable computing that is deeply embedded in Scala, called DIQL, and a query optimization framework that optimizes and translates DIQL queries to byte code at compile-time. In contrast to other query languages, our query embedding eliminates impedance mismatch as any Scala code can be seamlessly mixed with SQL-like syntax, without having to add any special  declaration. DIQL supports nested collections and hierarchical data and allows query nesting at any place in a query. With DIQL, programmers can express complex data analysis tasks, such as PageRank and matrix factorization, using SQL-like syntax exclusively. The DIQL query optimizer uses algebraic transformations to derive all possible joins in a query, including those hidden across deeply nested queries, thus unnesting nested queries of any form and any number of nesting levels. The optimizer also uses general transformations to push down predicates before joins and to prune unneeded data across operations. DIQL has been implemented on three Big Data platforms, Apache Spark, Apache Flink, and Twitter's Cascading/Scalding, and has been shown to have competitive performance relative to Spark DataFrames and Spark SQL for some complex queries. This paper extends our previous work on embedded data-intensive query languages by describing the complete details of the formal framework and the query translation and optimization processes, and by providing more experimental results that give further evidence of the performance of our system.}
    }
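
The unnesting optimization at the heart of the paper is easy to illustrate outside Scala. The following sketch (ours, with hypothetical data, in Python rather than DIQL's Scala embedding) shows the kind of rewrite such an optimizer performs: a query nested inside another becomes a single hash join.

    # Illustrative sketch of query unnesting: the nested form rescans the
    # inner collection for every outer element (quadratic), while the
    # unnested form builds a hash index once and probes it (linear).
    from collections import defaultdict

    employees = [("alice", 10), ("bob", 20), ("carol", 10)]  # (name, dept_id)
    departments = [(10, "db"), (20, "ai")]                   # (dept_id, name)

    # Nested query, as a programmer might naturally write it.
    nested = [(n, dname) for (n, d) in employees
                         for (did, dname) in departments if d == did]

    # What the optimizer derives: the hash join hidden in the nesting.
    by_id = defaultdict(list)
    for did, dname in departments:
        by_id[did].append(dname)
    joined = [(n, dname) for (n, d) in employees for dname in by_id[d]]

    assert nested == joined
    print(joined)   # [('alice', 'db'), ('bob', 'ai'), ('carol', 'db')]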

 Open Access 

Modelling Patterns in Continuous Streams of Data

Ricardo Jesus, Mario Antunes, Diogo Gomes, Rui L. Aguiar

Open Journal of Big Data (OJBD), 4(1), Pages 1-13, 2018, Downloads: 4322, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201801234777 | GNL-LP: 1151148423 | Meta-Data: tex xml rdf rss

Abstract: The untapped source of information, extracted from the increasing number of sensors, can be explored to improve and optimize several systems. Yet, hand in hand with this growth goes the increasing difficulty to manage and organize all this new information. The lack of a standard context representation scheme is one of the main struggles in this research area. Conventional methods for extracting knowledge from data rely on a standard representation or a priori relation, which may not be feasible for IoT and M2M scenarios. With this in mind we propose a stream characterization model in order to provide the foundations for a novel stream similarity metric. Complementing previous work on context organization, we aim to provide an automatic stream organizational model without enforcing specific representations. In this paper we extend our work on stream characterization and devise a novel similarity method.

BibTex:

    @Article{OJBD_2018v4i1n01_Jesus,
        title     = {Modelling Patterns in Continuous Streams of Data},
        author    = {Ricardo Jesus and
                     Mario Antunes and
                     Diogo Gomes and
                     Rui L. Aguiar},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2018},
        volume    = {4},
        number    = {1},
        pages     = {1--13},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201801234777},
        urn       = {urn:nbn:de:101:1-201801234777},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The untapped source of information, extracted from the increasing number of sensors, can be explored to improve and optimize several systems. Yet, hand in hand with this growth goes the increasing difficulty to manage and organize all this new information. The lack of a standard context representation scheme is one of the main struggles in this research area. Conventional methods for extracting knowledge from data rely on a standard representation or a priori relation, which may not be feasible for IoT and M2M scenarios. With this in mind we propose a stream characterization model in order to provide the foundations for a novel stream similarity metric. Complementing previous work on context organization, we aim to provide an automatic stream organizational model without enforcing specific representations. In this paper we extend our work on stream characterization and devise a novel similarity method.}
    }
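
As a rough illustration of stream characterization (a generic stand-in, not the similarity metric proposed in the paper), each stream can be summarized by a fixed-size signature, here a normalized value histogram, so that streams are compared via their signatures without agreeing on a common representation:

    # Sketch: characterize each stream by a normalized histogram of its
    # values, then compare signatures with cosine similarity.
    import math

    def characterize(stream, bins=8, lo=0.0, hi=1.0):
        hist = [0.0] * bins
        for v in stream:
            k = min(int((v - lo) / (hi - lo) * bins), bins - 1)
            hist[k] += 1.0
        total = sum(hist) or 1.0
        return [h / total for h in hist]

    def similarity(a, b):
        # Cosine similarity between two stream signatures.
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    s1 = [0.1, 0.12, 0.11, 0.5, 0.52]   # two similar sensor streams
    s2 = [0.09, 0.13, 0.1, 0.49, 0.55]
    s3 = [0.9, 0.95, 0.88, 0.91, 0.93]  # a very different one
    print(similarity(characterize(s1), characterize(s2)))  # high (~0.8)
    print(similarity(characterize(s1), characterize(s3)))  # 0.0 (disjoint bins)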

 Open Access 

Operation of Modular Smart Grid Applications Interacting through a Distributed Middleware

Stephan Cejka, Albin Frischenschlager, Mario Faschang, Mark Stefan, Konrad Diwold

Open Journal of Big Data (OJBD), 4(1), Pages 14-29, 2018, Downloads: 5068, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201801212419 | GNL-LP: 1151046426 | Meta-Data: tex xml rdf rss

Abstract: IoT-functionality can broaden the scope of distribution system automation in terms of functionality and communication. However, it also poses risks regarding resource consumption and security. This article presents a field approved IoT-enabled smart grid middleware, which allows for flexible deployment and management of applications within smart grid operation. In the first part of the work, the resource consumption of the middleware is analyzed and current memory bottlenecks are identified. The bottlenecks can be resolved by introducing a new entity that allows to dynamically load multiple applications within one JVM. The performance was experimentally tested and the results suggest that its application can significantly reduce the applications' memory footprint on the physical device. The second part of the study identifies and discusses potential security threats, with a focus on attacks stemming from malicious software applications within the framework. In order to prevent such attacks a proxy based prevention mechanism is developed and demonstrated.

BibTex:

    @Article{OJBD_2018v4i1n02_Cejka,
        title     = {Operation of Modular Smart Grid Applications Interacting through a Distributed Middleware},
        author    = {Stephan Cejka and
                     Albin Frischenschlager and
                     Mario Faschang and
                     Mark Stefan and
                     Konrad Diwold},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2018},
        volume    = {4},
        number    = {1},
        pages     = {14--29},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201801212419},
        urn       = {urn:nbn:de:101:1-201801212419},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {IoT-functionality can broaden the scope of distribution system automation in terms of functionality and communication. However, it also poses risks regarding resource consumption and security. This article presents a field approved IoT-enabled smart grid middleware, which allows for flexible deployment and management of applications within smart grid operation. In the first part of the work, the resource consumption of the middleware is analyzed and current memory bottlenecks are identified. The bottlenecks can be resolved by introducing a new entity that allows to dynamically load multiple applications within one JVM. The performance was experimentally tested and the results suggest that its application can significantly reduce the applications' memory footprint on the physical device. The second part of the study identifies and discusses potential security threats, with a focus on attacks stemming from malicious software applications within the framework. In order to prevent such attacks a proxy based prevention mechanism is developed and demonstrated.}
    }

 Open Access 

Cloud-Scale Entity Resolution: Current State and Open Challenges

Xiao Chen, Eike Schallehn, Gunter Saake

Open Journal of Big Data (OJBD), 4(1), Pages 30-51, 2018, Downloads: 5095, Citations: 18

Full-Text: pdf | URN: urn:nbn:de:101:1-201804155766 | GNL-LP: 1156154723 | Meta-Data: tex xml rdf rss

Abstract: Entity resolution (ER) is a process to identify records in information systems, which refer to the same real-world entity. Because in the two recent decades the data volume has grown so large, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potentials in this field.

BibTex:

    @Article{OJBD_2018v4i1n03_Chen,
        title     = {Cloud-Scale Entity Resolution: Current State and Open Challenges},
        author    = {Xiao Chen and
                     Eike Schallehn and
                     Gunter Saake},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2018},
        volume    = {4},
        number    = {1},
        pages     = {30--51},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201804155766},
        urn       = {urn:nbn:de:101:1-201804155766},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Entity resolution (ER) is a process to identify records in information systems, which refer to the same real-world entity. Because in the two recent decades the data volume has grown so large, parallel techniques are called upon to satisfy the ER requirements of high performance and scalability. The development of parallel ER has reached a relatively prosperous stage, and has found its way into several applications. In this work, we first comprehensively survey the state of the art of parallel ER approaches. From the comprehensive overview, we then extract the classification criteria of parallel ER, classify and compare these approaches based on these criteria. Finally, we identify open research questions and challenges and discuss potential solutions and further research potentials in this field.}
    }
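
The ER pipeline the survey classifies can be sketched sequentially in a few lines (the records and threshold below are hypothetical; parallel ER distributes the same blocks across workers):

    # Minimal ER sketch: blocking avoids the quadratic all-pairs comparison,
    # then a pairwise similarity match runs within each block.
    from difflib import SequenceMatcher
    from collections import defaultdict

    records = [
        (1, "Jon Smith", "NYC"),
        (2, "John Smith", "New York City"),
        (3, "Jane Doe", "Boston"),
        (4, "J. Smith", "NYC"),
    ]

    def blocking_key(rec):
        # Cheap key: first letter of the surname. Records in different
        # blocks are never compared, which is the scalability win.
        return rec[1].split()[-1][0].lower()

    blocks = defaultdict(list)
    for rec in records:
        blocks[blocking_key(rec)].append(rec)

    def match(a, b, threshold=0.6):
        return SequenceMatcher(None, a[1], b[1]).ratio() >= threshold

    pairs = [(a[0], b[0])
             for block in blocks.values()
             for i, a in enumerate(block)
             for b in block[i + 1:]
             if match(a, b)]
    print(pairs)   # [(1, 2), (1, 4), (2, 4)]: the three Smith variants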

 Open Access 

Technology Selection for Big Data and Analytical Applications

Denis Lehmann, David Fekete, Gottfried Vossen

Open Journal of Big Data (OJBD), 3(1), Pages 1-25, 2017, Downloads: 4007, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266876 | GNL-LP: 1147192790 | Meta-Data: tex xml rdf rss

Abstract: The term Big Data has become pervasive in recent years, as smart phones, televisions, washing machines, refrigerators, smart meters, diverse sensors, eyeglasses, and even clothes connect to the Internet. However, their generated data is essentially worthless without appropriate data analytics that utilizes information retrieval, statistics, as well as various other techniques. As Big Data is commonly too big for a single person or institution to investigate, appropriate tools are being used that go way beyond a traditional data warehouse and that have been developed in recent years. Unfortunately, there is no single solution but a large variety of different tools, each of which with distinct functionalities, properties and characteristics. Especially small and medium-sized companies have a hard time to keep track, as this requires time, skills, money, and specific knowledge that, in combination, result in high entrance barriers for Big Data utilization. This paper aims to reduce these barriers by explaining and structuring different classes of technologies and the basic criteria for proper technology selection. It proposes a framework that guides especially small and mid-sized companies through a suitable selection process that can serve as a basis for further advances.

BibTex:

    @Article{OJBD_2017v3n01_Lehmann,
        title     = {Technology Selection for Big Data and Analytical Applications},
        author    = {Denis Lehmann and
                     David Fekete and
                     Gottfried Vossen},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {1--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266876},
        urn       = {urn:nbn:de:101:1-201711266876},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The term Big Data has become pervasive in recent years, as smart phones, televisions, washing machines, refrigerators, smart meters, diverse sensors, eyeglasses, and even clothes connect to the Internet. However, their generated data is essentially worthless without appropriate data analytics that utilizes information retrieval, statistics, as well as various other techniques. As Big Data is commonly too big for a single person or institution to investigate, appropriate tools are being used that go way beyond a traditional data warehouse and that have been developed in recent years. Unfortunately, there is no single solution but a large variety of different tools, each of which with distinct functionalities, properties and characteristics. Especially small and medium-sized companies have a hard time to keep track, as this requires time, skills, money, and specific knowledge that, in combination, result in high entrance barriers for Big Data utilization. This paper aims to reduce these barriers by explaining and structuring different classes of technologies and the basic criteria for proper technology selection. It proposes a framework that guides especially small and mid-sized companies through a suitable selection process that can serve as a basis for further advances.}
    }

 Open Access 

Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects

Christophe Ponsard, Mounir Touzani, Annick Majchrowski

Open Journal of Big Data (OJBD), 3(1), Pages 26-41, 2017, Downloads: 5166, Citations: 7

Full-Text: pdf | URN: urn:nbn:de:101:1-201712245446 | GNL-LP: 1149497165 | Meta-Data: tex xml rdf rss

Abstract: Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions are available to manage such Big Data, many companies lack the maturity to manage that kind of projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature from early data mining projects to nowadays. It is complemented by learned lessons from pilots conducted in different areas (IT, health, space, food industry) with a focus on two pilots giving a concrete vision of how to drive the implementation with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up and a progressive rise in maturity.

BibTex:

    @Article{OJBD_2017v3i1n02_Ponsard,
        title     = {Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects},
        author    = {Christophe Ponsard and
                     Mounir Touzani and
                     Annick Majchrowski},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {26--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201712245446},
        urn       = {urn:nbn:de:101:1-201712245446},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions are available to manage such Big Data, many companies lack the maturity to manage that kind of projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature from early data mining projects to nowadays. It is complemented by learned lessons from pilots conducted in different areas (IT, health, space, food industry) with a focus on two pilots giving a concrete vision of how to drive the implementation with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up and a progressive rise in maturity.}
    }

 Open Access 

Conformance of Social Media as Barometer of Public Engagement

Songchun Moon

Open Journal of Big Data (OJBD), 2(1), Pages 1-10, 2016, Downloads: 5478

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194393 | GNL-LP: 1132360560 | Meta-Data: tex xml rdf rss

Abstract: There have been continuously a number of expectations: Social media may play a role of indicator that shows the degree of engagement and preference of choices of users toward music or movies. However, finding appropriate software tools in the market to verify this sort of expectation is too costly and complicated in their natures, and this causes a number of difficulties to attempt technical experimentation. A convenient and easy tool to facilitate such experimentation was developed in this study and was used successfully for performing various measurements with regard to user engagement in music and movies.

BibTex:

    @Article{OJBD_2016v2i101_Moon,
        title     = {Conformance of Social Media as Barometer of Public Engagement},
        author    = {Songchun Moon},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {1--10},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194393},
        urn       = {urn:nbn:de:101:1-201705194393},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There have been continuously a number of expectations: Social media may play a role of indicator that shows the degree of engagement and preference of choices of users toward music or movies. However, finding appropriate software tools in the market to verify this sort of expectation is too costly and complicated in their natures, and this causes a number of difficulties to attempt technical experimentation. A convenient and easy tool to facilitate such experimentation was developed in this study and was used successfully for performing various measurements with regard to user engagement in music and movies.}
    }

 Open Access 

Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders

Sven Groppe, Dennis Heinrich, Christopher Blochwitz, Thilo Pionteck

Open Journal of Big Data (OJBD), 2(1), Pages 11-25, 2016, Downloads: 5511, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194418 | GNL-LP: 1132360587 | Meta-Data: tex xml rdf rss

Abstract: The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: The basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand having 6 collation orders indexed fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real world data.

BibTex:

    @Article{OJBD_2016v2i1n02_Groppe,
        title     = {Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {11--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194418},
        urn       = {urn:nbn:de:101:1-201705194418},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: The basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand having 6 collation orders indexed fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real world data.}
    }
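
The figure of six collation orders follows directly from the triple structure: a triple (subject, predicate, object) admits exactly 3! = 6 orderings. Below is a minimal in-memory sketch of the idea, with sorted lists standing in for the B+-tree indices a real engine would build:

    # Sketch: one sorted index per collation order of RDF triples. Any
    # triple pattern is then answered by a range scan over the index whose
    # sort order puts the bound components first, enabling merge joins.
    from itertools import permutations

    triples = [
        ("alice", "knows", "bob"),
        ("alice", "age", "42"),
        ("bob", "knows", "carol"),
    ]

    POS = {"s": 0, "p": 1, "o": 2}
    orders = ["".join(p) for p in permutations("spo")]
    # orders == ['spo', 'sop', 'pso', 'pos', 'osp', 'ops']

    indices = {o: sorted(triples,
                         key=lambda t, o=o: tuple(t[POS[c]] for c in o))
               for o in orders}

    # A pattern binding p (and optionally o) is served by the POS index:
    # all matches are adjacent in its sort order.
    print(indices["pos"])
    print([t for t in indices["pos"] if t[1] == "knows"])  # contiguous slice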

 Open Access 

New Areas of Contributions and New Addition of Security

Victor Chang

Open Journal of Big Data (OJBD), 2(1), Pages 26-28, 2016, Downloads: 3720

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194405 | GNL-LP: 1132360579 | Meta-Data: tex xml rdf rss

Abstract: Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for the public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will set up the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017), in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJBD_2016v2i1n03e_Chang,
        title     = {New Areas of Contributions and New Addition of Security},
        author    = {Victor Chang},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {26--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194405},
        urn       = {urn:nbn:de:101:1-201705194405},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for the public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will set up the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017), in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.}
    }

 Open Access 

Big Data in the Cloud: A Survey

Pedro Caldeira Neves, Jorge Bernardino

Open Journal of Big Data (OJBD), 1(2), Pages 1-18, 2015, Downloads: 11707, Citations: 14

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194365 | GNL-LP: 1132360528 | Meta-Data: tex xml rdf rss

Abstract: Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities, and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. Particularly, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.

BibTex:

    @Article{OJBD_2015v1i2n02_Neves,
        title     = {Big Data in the Cloud: A Survey},
        author    = {Pedro Caldeira Neves and
                     Jorge Bernardino},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194365},
        urn       = {urn:nbn:de:101:1-201705194365},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Big Data has become a hot topic across several business areas requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities, and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. Particularly, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop, Twister, and other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.}
    }
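
The survey's MapReduce discussion boils down to three phases that the frameworks automate and distribute across a cluster. A single-machine toy sketch of those phases (an illustration only, not any of the surveyed systems):

    from collections import defaultdict

    documents = ["big data in the cloud", "cloud data stores scale"]

    # Map: emit (key, value) pairs from each input record.
    mapped = [(word, 1) for doc in documents for word in doc.split()]

    # Shuffle: group all values by key (done by the framework in reality).
    groups = defaultdict(list)
    for word, count in mapped:
        groups[word].append(count)

    # Reduce: aggregate the values of each group.
    counts = {word: sum(vals) for word, vals in groups.items()}
    print(counts["cloud"])  # -> 2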

 Open Access 

Statistical Machine Learning in Brain State Classification using EEG Data

Yuezhe Li, Yuchou Chang, Hong Lin

Open Journal of Big Data (OJBD), 1(2), Pages 19-33, 2015, Downloads: 9976, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194354 | GNL-LP: 113236051X | Meta-Data: tex xml rdf rss

Abstract: In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which was collected from different brain states of different subjects. Also, we discuss how the training data size influences the misclassification rate. Moreover, the number of subjects that contribute to the training data affects the misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results for each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish the idle from the active state.

BibTex:

    @Article{OJBD_2015v1i2n03_YuehzeLi,
        title     = {Statistical Machine Learning in Brain State Classification using EEG Data},
        author    = {Yuezhe Li and
                     Yuchou Chang and
                     Hong Lin},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {19--33},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194354},
        urn       = {urn:nbn:de:101:1-201705194354},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which was collected from different brain states of different subjects. Also, we discuss how the training data size influences the misclassification rate. Moreover, the number of subjects that contribute to the training data affects the misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results for each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish the idle from the active state.}
    }
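
As a rough, hedged sketch of the kind of pipeline the article studies (synthetic placeholder features standing in for per-channel EEG measures such as sample entropy; the authors' actual data, features and tuning are not reproduced here), a random-forest classifier in scikit-learn looks like this:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    n_samples, n_channels = 200, 7                 # seven channels, as in the article
    X = rng.normal(size=(n_samples, n_channels))   # placeholder feature matrix
    y = rng.integers(0, 2, size=n_samples)         # idle (0) vs. active (1) state

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    print("misclassification rate:", 1 - clf.score(X_test, y_test))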

 Open Access 

Data Transfers in Hadoop: A Comparative Study

Ujjal Marjit, Kumar Sharma, Puspendu Mandal

Open Journal of Big Data (OJBD), 1(2), Pages 34-46, 2015, Downloads: 12380, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194373 | GNL-LP: 1132360536 | Meta-Data: tex xml rdf rss

Abstract: Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data have been emphasized. Moreover, a state-of-the-art comparative study among the various tools has been made. This study helps decide where to use one tool over the other, with emphasis on the data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions in terms of Big Data transfer when dealing with cloud-based services.

BibTex:

    @Article{OJBD_2015v1i2n04_UjjalMarjit,
        title     = {Data Transfers in Hadoop: A Comparative Study},
        author    = {Ujjal Marjit and
                     Kumar Sharma and
                     Puspendu Mandal},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {34--46},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194373},
        urn       = {urn:nbn:de:101:1-201705194373},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data have been emphasized. Moreover, a state-of-the-art comparative study among the various tools has been made. This study helps decide where to use one tool over the other, with emphasis on the data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions in terms of Big Data transfer when dealing with cloud-based services.}
    }
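
For plain files, the baseline that the specialised import/export tools compete with is the stock `hadoop fs` shell. A minimal sketch, assuming a configured Hadoop client on the PATH; all paths below are placeholders:

    import subprocess

    def hdfs_put(local_path: str, hdfs_path: str) -> None:
        # Import direction: copy a local file into HDFS.
        subprocess.run(["hadoop", "fs", "-put", local_path, hdfs_path], check=True)

    def hdfs_get(hdfs_path: str, local_path: str) -> None:
        # Export direction: copy an HDFS file back to local storage.
        subprocess.run(["hadoop", "fs", "-get", hdfs_path, local_path], check=True)

    # Example (requires a running HDFS cluster):
    # hdfs_put("/tmp/events.log", "/data/raw/events.log")
    # hdfs_get("/data/raw/events.log", "/tmp/events.copy.log")

Anything beyond flat files (relational or streaming sources, incremental loads) is where the dedicated tools compared in the article take over.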

 Open Access 

Epilogue: Summary and Outlook

Victor Chang, Muthu Ramachandran, Robert John Walters, Gary B. Wills

Open Journal of Big Data (OJBD), 1(2), Pages 47-50, 2015, Downloads: 5834

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194383 | GNL-LP: 1132360544 | Meta-Data: tex xml rdf rss

Abstract: Open Journal of Big Data (OJBD) is an open access journal addressing aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents three articles in the second issue. The first paper is on Big Data in the Cloud. The second paper is on Statistical Machine Learning in Brain State Classification using EEG Data. The third article is on Data Transfers in Hadoop. OJBD has a rising reputation thanks to the support of research communities, which has helped us set up the First International Conference on Internet of Things and Big Data (IoTBD 2016), in Rome, Italy, between 23 and 25 April 2016. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJBD_2015v1i2n05e_Chang,
        title     = {Epilogue: Summary and Outlook},
        author    = {Victor Chang and
                     Muthu Ramachandran and
                     Robert John Walters and
                     Gary B. Wills},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {47--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194383},
        urn       = {urn:nbn:de:101:1-201705194383},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Open Journal of Big Data (OJBD) is an open access journal addressing aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents three articles in the second issue. The first paper is on Big Data in the Cloud. The second paper is on Statistical Machine Learning in Brain State Classification using EEG Data. The third article is on Data Transfers in Hadoop. OJBD has a rising reputation thanks to the support of research communities, which has helped us set up the First International Conference on Internet of Things and Big Data (IoTBD 2016), in Rome, Italy, between 23 and 25 April 2016. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.}
    }

 Open Access 

Introductory Editorial

Victor Chang, Muthu Ramachandran, Robert John Walters, Gary B. Wills

Open Journal of Big Data (OJBD), 1(1), Pages 1-3, 2015, Downloads: 5488

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194326 | GNL-LP: 1132360471 | Meta-Data: tex xml rdf rss

Abstract: The Open Journal of Big Data is a new open access journal published by RonPub, and RonPub is an academic publisher of online, open access, peer-reviewed journals. OJBD addresses aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents the two articles in this first issue. The first paper is on An Efficient Approach for Cost Optimization of the Movement of Big Data, which mainly focuses on the challenge of moving big data from one data center to another. The second paper is on Cognitive Spam Recognition Using Hadoop and Multicast-Update, which describes a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. OJBD has a rising reputation thanks to the support of research communities, which helped us set up the First International Conference on Internet of Things and Big Data 2016 (IoTBD 2016), in Rome, Italy, between 23 and 25 April 2016.

BibTex:

    @Article{OJBD_2015v1i1n01_Chang,
        title     = {Introductory Editorial},
        author    = {Victor Chang and
                     Muthu Ramachandran and
                     Robert John Walters and
                     Gary B. Wills},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {1--3},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194326},
        urn       = {urn:nbn:de:101:1-201705194326},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Open Journal of Big Data is a new open access journal published by RonPub, and RonPub is an academic publisher of online, open access, peer-reviewed journals. OJBD addresses aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents the two articles in this first issue. The first paper is on An Efficient Approach for Cost Optimization of the Movement of Big Data, which mainly focuses on the challenge of moving big data from one data center to another. The second paper is on Cognitive Spam Recognition Using Hadoop and Multicast-Update, which describes a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. OJBD has a rising reputation thanks to the support of research communities, which helped us set up the First International Conference on Internet of Things and Big Data 2016 (IoTBD 2016), in Rome, Italy, between 23 and 25 April 2016.}
    }

 Open Access 

An Efficient Approach for Cost Optimization of the Movement of Big Data

Prasad Teli, Manoj V. Thomas, K. Chandrasekaran

Open Journal of Big Data (OJBD), 1(1), Pages 4-15, 2015, Downloads: 10027, Citations: 11

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194335 | GNL-LP: 113236048X | Meta-Data: tex xml rdf rss

Abstract: With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by service providers to host their applications. A lot of research has been done to improve the network utilization of the WAN (Wide Area Network), and it has achieved considerable success over traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement, such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for the optimization of the cost of moving big data from one data center to another in an offline environment. This approach uses a graph model for data centers in the cloud, and results show that the adopted mechanism provides a better solution to minimize the cost of data movement.

BibTex:

    @Article{OJBD_2015v1i1n02_Teli,
        title     = {An Efficient Approach for Cost Optimization of the Movement of Big Data},
        author    = {Prasad Teli and
                     Manoj V. Thomas and
                     K. Chandrasekaran},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194335},
        urn       = {urn:nbn:de:101:1-201705194335},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by service providers to host their applications. A lot of research has been done to improve the network utilization of the WAN (Wide Area Network), and it has achieved considerable success over traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement, such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volumes of data in the form of application data as well as logs. This paper mainly focuses on the challenge of moving big data from one data center to another. We provide an efficient algorithm for the optimization of the cost of moving big data from one data center to another in an offline environment. This approach uses a graph model for data centers in the cloud, and results show that the adopted mechanism provides a better solution to minimize the cost of data movement.}
    }
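
The paper's own optimization algorithm is not reproduced here, but the underlying model is easy to illustrate: data centers are nodes of a weighted graph, edge weights stand for transfer cost, and a cheapest movement route falls out of a shortest-path search. A minimal Dijkstra sketch with placeholder data centers and costs:

    import heapq

    # Placeholder data-center graph; edge weights stand for transfer cost.
    graph = {
        "DC1": {"DC2": 4.0, "DC3": 1.5},
        "DC2": {"DC4": 2.0},
        "DC3": {"DC2": 1.0, "DC4": 5.0},
        "DC4": {},
    }

    def cheapest_route(source, target):
        heap = [(0.0, source, [source])]   # (cost so far, node, path)
        seen = set()
        while heap:
            cost, node, path = heapq.heappop(heap)
            if node == target:
                return cost, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, weight in graph[node].items():
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
        return float("inf"), []

    print(cheapest_route("DC1", "DC4"))    # -> (4.5, ['DC1', 'DC3', 'DC2', 'DC4'])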

 Open Access 

Cognitive Spam Recognition Using Hadoop and Multicast-Update

Mukund YR, Sunil Sandeep Nayak, K. Chandrasekaran

Open Journal of Big Data (OJBD), 1(1), Pages 16-28, 2015, Downloads: 9380, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194340 | GNL-LP: 1132360498 | Meta-Data: tex xml rdf rss

Abstract: In today's world of exponentially growing technology, spam is a very common issue faced by users on the internet. Spam not only hinders the performance of a network, but it also wastes space and time, and causes general irritation and presents a multitude of dangers - of viruses, malware, spyware and consequent system failure, identity theft, and other cyber criminal activity. In this context, cognition provides us with a method to help improve the performance of the distributed system. It enables the system to learn what it is supposed to do for different input types as different classifications are made over time and this learning helps it increase its accuracy as time passes. Each system on its own can only do so much learning, because of the limited sample set of inputs that it gets to process. However, in a network, we can make sure that every system knows the different kinds of inputs available and learns what it is supposed to do with a better success rate. Thus, distribution and combination of this cognition across different components of the network leads to an overall improvement in the performance of the system. In this paper, we describe a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. We also present two possible methods of implementation - using a MapReduce Framework (hadoop), and also using messages coupled with a multicast-send based network - with their own subtypes, and the pros and cons of each. We finally present a comparative analysis of the two main methods and provide a basic idea about the usefulness of the two in various different scenarios.

BibTex:

    @Article{OJBD_2015v1i1n03_YR,
        title     = {Cognitive Spam Recognition Using Hadoop and Multicast-Update},
        author    = {Mukund YR and
                     Sunil Sandeep Nayak and
                     K. Chandrasekaran},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {16--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194340},
        urn       = {urn:nbn:de:101:1-201705194340},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In today's world of exponentially growing technology, spam is a very common issue faced by users on the internet. Spam not only hinders the performance of a network, but it also wastes space and time, and causes general irritation and presents a multitude of dangers - of viruses, malware, spyware and consequent system failure, identity theft, and other cyber criminal activity. In this context, cognition provides us with a method to help improve the performance of the distributed system. It enables the system to learn what it is supposed to do for different input types as different classifications are made over time and this learning helps it increase its accuracy as time passes. Each system on its own can only do so much learning, because of the limited sample set of inputs that it gets to process. However, in a network, we can make sure that every system knows the different kinds of inputs available and learns what it is supposed to do with a better success rate. Thus, distribution and combination of this cognition across different components of the network leads to an overall improvement in the performance of the system. In this paper, we describe a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. We also present two possible methods of implementation - using a MapReduce Framework (hadoop), and also using messages coupled with a multicast-send based network - with their own subtypes, and the pros and cons of each. We finally present a comparative analysis of the two main methods and provide a basic idea about the usefulness of the two in various different scenarios.}
    }
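
As a toy illustration of the Naive Bayesian core (without the Hadoop and multicast-update layers that are the article's actual contribution), here is a self-contained spam scorer with Laplace smoothing and equal class priors:

    from collections import Counter
    import math

    spam = ["win money now", "free money offer"]
    ham  = ["meeting agenda attached", "lunch tomorrow"]

    def word_counts(msgs):
        return Counter(w for m in msgs for w in m.split())

    spam_counts, ham_counts = word_counts(spam), word_counts(ham)
    spam_total, ham_total = sum(spam_counts.values()), sum(ham_counts.values())
    vocab = set(spam_counts) | set(ham_counts)

    def log_likelihood(msg, counts, total):
        # Laplace-smoothed log-likelihood of the message under one class.
        return sum(math.log((counts[w] + 1) / (total + len(vocab)))
                   for w in msg.split())

    def is_spam(msg):
        # Equal priors assumed, so compare the class log-likelihoods directly.
        return (log_likelihood(msg, spam_counts, spam_total)
                > log_likelihood(msg, ham_counts, ham_total))

    print(is_spam("free money"))        # -> True
    print(is_spam("meeting tomorrow"))  # -> False

In the distributed setting the article describes, the learned counts are what the nodes share, so every machine benefits from classifications made elsewhere.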

OJBD Publication Fees

All articles published by RonPub are fully open access and available online to readers free of charge. To be able to provide open access journals, RonPub defrays the costs (induced by processing and editing of manuscripts, provision and maintenance of infrastructure, and routine operation and management of journals) by charging a one-time publication fee for each accepted article. In order to ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors from low-income countries. Authors who do not have funds to cover publication fees should submit an application during the submission process. Applications for waivers are examined on a case-by-case basis. The scientific committee members of RonPub are entitled to a partial waiver of the standard publication fee as a reward for their work.

  • Standard publication fee: 338 Euro (excluding tax).
  • Authors from low-income countries: 71% waiver of the standard publication fee. (Note: the list is subject to change based on data from the World Bank Group.):
    Afghanistan, Bangladesh, Benin, Bhutan, Bolivia (Plurinational State of), Burkina Faso, Burundi, Cambodia, Cameroon, Central African Republic, Chad, Comoros, Congo (Democratic Republic), Côte d'Ivoire, Djibouti, Eritrea, Ethiopia, Gambia, Ghana, Guinea, Guinea-Bissau, Haiti, Honduras, Kenya, Kiribati, Korea (Democratic People’s Republic), Kosovo, Kyrgyz Republic, Lao (People’s Democratic Republic), Lesotho, Liberia, Madagascar, Malawi, Mali, Mauritania, Micronesia (Federated States of), Moldova, Morocco, Mozambique, Myanmar, Nepal, Nicaragua, Niger, Nigeria, Papua New Guinea, Rwanda, Senegal, Sierra Leone, Solomon Islands, Somalia, South Sudan, Sudan, Swaziland, Syrian Arab Republic, São Tomé and Principe, Tajikistan, Tanzania, Timor-Leste, Togo, Uganda, Uzbekistan, Vietnam, West Bank and Gaza Strip, Yemen (Republic), Zambia, Zimbabwe
  • Scientific committee members: 25% waiver of the standard publication fee.
  • Guest editors and reviewers: 25% waiver of the standard publication fee for one year.

Payments are subject to tax. A German VAT (value-added tax) of 19% will be charged if applicable. US and Canadian customers need to provide their sales tax number and their certificate of incorporation to be exempt from the VAT charge; European Union customers (except German customers) need to provide their VAT number to be exempt from the VAT charge. Customers from Germany and all other countries will be charged VAT. Individuals are not eligible for tax-exempt status.
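
A worked example of the schedule above, using the figures from this page (whether VAT applies depends on the customer's situation, as described):

    STANDARD_FEE = 338.00   # Euro, excluding tax
    VAT_RATE = 0.19         # German VAT, charged if applicable

    def publication_fee(waiver: float = 0.0, vat: bool = True) -> float:
        net = STANDARD_FEE * (1 - waiver)
        return round(net * (1 + VAT_RATE) if vat else net, 2)

    print(publication_fee())                        # no waiver, with VAT -> 402.22
    print(publication_fee(waiver=0.71))             # low-income waiver, with VAT -> 116.64
    print(publication_fee(waiver=0.25, vat=False))  # committee member, VAT-exempt -> 253.5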

Editors and reviewers have no access to payment information. The inability to pay will not influence the decision to publish a paper; decisions to publish are only based on the quality of work and the editorial criteria.

OJBD Indexing

In order for our publications to be widely abstracted, indexed and cited, the following methods are employed:

  • Various meta tags are embedded in each publication webpage, including Google Scholar Tags, Dublin Core, EPrints, BE Press and Prism. This enables crawlers of, e.g., Google Scholar to discover and index our publications (see the sketch after this list).
  • Different metadata export formats are provided for each article, including BibTex, XML, RSS and RDF. This makes it easy for readers to cite our papers.
  • An OAI-PMH interface is implemented, which facilitates our article metadata harvesting by indexing services and databases.
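
As an illustration of the first point, the sketch below emits Google Scholar ("Highwire"-style) citation meta tags for a sample OJBD article listed on this page; the exact tag set RonPub embeds may differ:

    # Bibliographic fields of a sample article from this page.
    article = {
        "citation_title": "Big Data in the Cloud: A Survey",
        "citation_author": ["Pedro Caldeira Neves", "Jorge Bernardino"],
        "citation_journal_title": "Open Journal of Big Data (OJBD)",
        "citation_publication_date": "2015",
    }

    tags = []
    for name, value in article.items():
        values = value if isinstance(value, list) else [value]
        for v in values:  # Google Scholar expects one citation_author tag per author
            tags.append(f'<meta name="{name}" content="{v}">')

    print("\n".join(tags))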

The paper Getting Indexed by Bibliographic Databases in the Area of Computer Science provides a comprehensive survey on indexing formats, techniques and databases. We will also continue our efforts on dissemination and indexing of our publications.

OJBD has been indexed by the following libraries and bibliographic databases:

Submission to Open Journal of Big Data (OJBD)

Please submit your manuscript by carefully filling in the information in the following web form. If there are technical problems, you may also submit your manuscript by sending the information and the manuscript to the contact email address given on the journal page.

Submission to Regular or Special Issue

Please specify if the paper is submitted to a regular issue or one of the special issues:

Type of Paper

Please specify the type of your paper here. Please check Aims & Scope if you are not sure which type your paper is.





Title

Please specify the title of your paper here:

Abstract

Please copy & paste the abstract of your paper here:

Authors

Please provide the necessary information about the authors of your submission here. Please mark the contact authors, who will be contacted for the main correspondence.


Conflicts of Interest

Please specify any conflicts of interest here. Conflicts of interest occur, e.g., if an author and the editor are colleagues, work or have worked closely together, or are relatives.

Suggestion of Editors (Optional)

You can suggest editors (with scientific background of the topics addressed in your submission) for handling your submission. The Editor-in-Chief may consider your suggestion, but may also choose another editor.

Suggestion of Reviewers (Optional)

You can suggest reviewers (with scientific background of the topics addressed in your submission) for handling your submission. The editor of your submission may consider your suggestion, but may also choose other or additional reviewers in order to guarantee an independent review process.


Paper upload

Please choose your manuscript file for uploading. It should be a pdf file. Please take care that your manuscript is formatted according to the templates provided by RonPub, which are available at our Author Guidelines page. Manuscripts not formatted according to our RonPub templates will be rejected without review!

If you wish the reviewers to be unaware of your identity, please submit a blinded manuscript that omits identifying information such as the authors' names and affiliations.


For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal. It is available in Word and LaTeX versions for download on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and other necessary information for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal. Authors first submit their manuscripts in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file along with the Word file or LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. If an author wants to do so, please provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submission of manuscripts, authors will receive an email to confirm receipt of manuscripts within a few days. Subsequent enquiries concerning paper progress should be made to the corresponding editorial office (see individual journal webpage for concrete contact information).

Review Procedure

RonPub is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports will be collected. The editor and the editor-in-chief evaluate the manuscript itself as well as the review reports and make an accept/revise/reject decision. Authors will be informed of the decision and the review results within 6-8 weeks on average after the manuscript submission. In the case of a revision, authors are required to perform an adequate revision to address the concerns raised in the evaluation reports. A new round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications have been archived and are permanently preserved in the German National Library. The publications archived in the German National Library are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original systems, which are long obsolete. Further measures will be taken if necessary. Furthermore, we encourage our authors to self-archive the articles they have published with RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, 60 kilometers from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in the management of the journal. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision whether a paper can be published after peer-review and revisions.

As a reward for this work, the Editor-in-Chief obtains a 25% discount on the standard publication fee for her/his papers (where the Editor-in-Chief is one of the authors) published in any RonPub journal.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in ensuring the scientific quality of the journal and in decisions about its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript him-/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation about the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

The communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the Editorial Office, so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or under some special circumstances, the editorial office may forward the requests to the Publisher or Editor-in-Chief, who will make the decision directly.

As a reward for their work, an editor will obtain a 25% discount on the standard publication fee for her/his papers (where the editor is one of the authors) published in any RonPub journal.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors will be in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by (guest) editors will be sent to the Editor-in-Chief of the journal, who will check the quality of the papers and make the final decision on whether a paper can be published.

Our editorial office reserves the right to ask authors directly to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors are required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors do not make a genuine effort to revise their paper. A high-quality publication needs the combined efforts of the journal, reviewers, editors, editor-in-chief and authors.

The Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue will obtain a 25% discount on the standard publication fee for any of their papers published in any RonPub journal for one year.

Reviewers’ Responsibility

A reviewer is mainly responsible for reviewing manuscripts, writing review reports and recommending the acceptance or rejection of manuscripts. Reviewers are encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

As a reward for the reviewing work, a reviewer will obtain a 25% discount on the standard publication fee for her/his papers (where the reviewer is one of the authors) published in any RonPub journal.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaborations to the RonPub contact address.

Publication Criteria

This part provides important information for both scientific committee members and authors.

Ethics Requirement:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office will check the authenticity of the authors and their affiliations before a peer review begins. It is important that the authors communicate with us using the email addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journal against existing publications. The overall quality of the paper will also be checked, including format, figures, tables, integrity and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for the acceptance of manuscripts is the quality of the work. Concretely, this is reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, these points should reach at least a satisfactory level.

Guidelines for Rejection:

  • If the work described in the manuscript has been published, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work involves plagiarism, data falsification or fabrication, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access and peer-reviewed scientific journals. For more information about RonPub, please visit www.ronpub.com.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals, along with your curriculum vitae, to the RonPub contact address.

We are also open to publishing collaborations with research societies. Please send your publishing collaboration proposals to the same address.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or a reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any RonPub journal, please send an email with your curriculum vitae to the RonPub contact address. We will get back to you as soon as possible. For more information about editors and reviewers, please visit the RonPub website.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please e-mail the RonPub contact address.

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.