
RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model, as defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction free of charge in any medium, provided that the original work is properly cited.
  • Authors retain all copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website.

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast, high-quality, and low-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver to editors and reviewers of RonPub as a reward for their work. See the respective journal's webpage for the specific publication fee.

RonPub Publication Criteria

We are most concerned about the quality, not the quantity, of publications, and we only publish high-quality scholarly papers. The Publication Criteria below describe the criteria that a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

To ensure publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater, and Viper, to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. Archived publications are not only preserved long-term but also remain accessible in the future: the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, offering wonderful seaside resorts and sandy beaches as well as good restaurants.

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available for download in Word and LaTeX versions on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and provides other information necessary for preparing them. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal, initially in PDF format. Once a manuscript is accepted, the author submits the revised manuscript as a PDF file together with the Word file or LaTeX folder (containing all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome, but not required, to suggest qualified reviewers for their papers. If you do so, please provide the names, affiliations, and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an e-mail confirming its receipt within a few days. Subsequent enquiries concerning the paper's progress should be made to the corresponding editorial office (see the individual journal webpage for contact information).

Review Procedure

RonPub is committed to a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who is in charge of the review process for that manuscript. The editor first suggests potential reviewers and then either organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript itself together with the review reports and make an accept/revise/reject decision. Authors are informed of the decision and the review results on average within 6-8 weeks after submission. In the case of a revision, authors are required to adequately address the concerns raised in the review reports. A new round of peer review is performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. Archived publications are not only preserved long-term but also remain accessible in the future: the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems. Further measures will be taken if necessary. Furthermore, we encourage our authors to self-archive the articles they have published with RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, about 60 kilometers from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in the management of the journal. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision whether a paper can be published after peer-review and revisions.

As a reward for this work, the Editor-in-Chief obtains a 25% discount on the standard publication fee for papers of which she/he is an author, published in any of the RonPub journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in maintaining the scientific quality of the journal and in deciding on its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript herself/himself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation about the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the editorial office so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or in special situations, the editorial office may forward the request to the publisher or Editor-in-Chief, who will take the decision directly.

As a reward for their work, an editor obtains a 25% discount on the standard publication fee for papers of which she/he is an author, published in any of the RonPub journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors are in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by the (guest) editors are sent to the Editor-in-Chief of the journal, who checks the quality of the papers and makes the final decision on whether a paper can be published.

Our editorial office has the right to directly ask authors to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors are required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality; however, this occurs only when authors do not make a genuine effort to revise their paper. A high-quality publication requires the combined efforts of the journal, reviewers, editors, editor-in-chief, and authors.

The Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue obtain, for one year, a 25% discount on the standard publication fee for any of their papers published in any of the RonPub journals.

Reviewers’ Responsibility

A reviewer is mainly responsible for reviewing manuscripts, writing review reports, and recommending the acceptance or rejection of manuscripts. Reviewers are also encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

As a reward for their reviewing work, a reviewer obtains a 25% discount on the standard publication fee for papers of which she/he is an author, published in any of the RonPub journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaborations to the contact e-mail address provided on the RonPub website.

Publication Criteria

This part provides important information for both the scientific committees and authors.

Ethics Requirement:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office checks the authenticity of the authors and their affiliations before peer review begins. It is important that authors communicate with us using the e-mail addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of submitted manuscripts against existing publications. The overall quality of the paper is also checked, including format, figures, tables, integrity, and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is of obviously low quality, it is rejected directly.

Acceptance Criteria:

The criterion for acceptance of manuscripts is the quality of the work, which is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, each of these points should be rated at least average.

Guidelines for Rejection:

  • If the work described in the manuscript has been published, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains falsified or fabricated data, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access, peer-reviewed scientific journals. For more information about RonPub, please visit www.ronpub.com.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals, along with your curriculum vitae, to the contact e-mail address provided on the RonPub website.

We are also open to publishing collaborations with research societies. Please send your collaboration proposals to the contact e-mail address provided on the RonPub website.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or a reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any of the RonPub journals, please send an e-mail with your curriculum vitae to the contact address provided on the RonPub website. We will get back to you as soon as possible. For more information about editors and reviewers, please see the For Editors section.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please use the contact e-mail address provided on the RonPub website.

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.

RonPub's Transparent Impact Factor of the Year 2018: 1.44

There are numerous criticisms of the use of impact factors and debates about the validity of the impact factor as a measure of journal importance [1, 2, 3, 5, 6, 8, 9]. Several national-level institutions, such as the German Research Foundation [4] and the Science and Technology Select Committee [7] of the United Kingdom, urge their funding councils to evaluate only the quality of individual articles, not the reputation of the journal in which they are published. Nevertheless, we are sometimes asked about the impact factors of our journals, so we provide them here for readers who are still interested. Our impact factors are calculated in the same way as those of Thomson Reuters; however, they are not computed by Thomson Reuters but by ourselves, and they can be validated by anyone, because we provide all the data used in the computation, to anyone, requiring neither registration nor fees. These data are provided here, and each reader can recompute and check the calculation. We therefore call our impact factor the Transparent Impact Factor.

For the calculation of the impact factor of a year Y, we need the number A of articles published in the years Y-1 and Y-2 (excluding editorials). Furthermore, we determine the number B of citations in the year Y that cite articles of RonPub published in the years Y-1 or Y-2. The (2-year) Transparent Impact Factor is then B/A.

There are A := 39 articles published in the years 2016 and 2017. These articles received B := 56 citations in scientific contributions published in 2018. These citations are listed below.

Therefore, the (2-year) Transparent Impact Factor for the year 2018 is B/A = 56/39 ≈ 1.44.
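The calculation above can be sketched in a few lines of Python. This is only an illustration: the function name is ours, and the values of A and B are taken from the text.

```python
# Transparent Impact Factor for a year Y: the number B of citations in
# year Y to articles published in years Y-1 and Y-2, divided by the
# number A of such articles (excluding editorials).
def transparent_impact_factor(articles: int, citations: int) -> float:
    """Return the 2-year impact factor B/A, rounded to two decimals."""
    return round(citations / articles, 2)

# A = 39 articles published in 2016 and 2017,
# B = 56 citations received in 2018 (figures from the text above):
print(transparent_impact_factor(articles=39, citations=56))  # prints 1.44
```

Because all underlying data are published below, anyone can re-run this calculation and verify the result.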

References

  1. Björn Brembs, Katherine Button and Marcus Munafò. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7 (291): 1–12, 2013.
  2. Ewen Callaway. Beat it, impact factor! Publishing elite turns against controversial metric. Nature, 535 (7611): 210–211, 2016.
  3. Masood Fooladi, Hadi Salehi, Melor Md Yunus, Maryam Farhadi, Arezoo Aghaei Chadegani, Hadi Farhadi, Nader Ale Ebrahim. Does Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9 (5), 2013.
  4. German Research Foundation, "Quality not Quantity" – DFG Adopts Rules to Counter the Flood of Publications in Research, Press Release No. 7, 2010.
  5. Khaled Moustafa. The disaster of the impact factor. Science and Engineering Ethics, 21 (1): 139–142, 2015.
  6. Mike Rossner, Heather Van Epps, Emma Hill. Show me the data. Journal of Cell Biology, 179 (6): 1091–2, 2007.
  7. Science and Technology Committee, Scientific Publications: Free for all? Tenth Report of the Science and Technology Committee of the House of Commons, 2004.
  8. Maarten van Wesel. Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications. Science and Engineering Ethics, 22 (1): 199–225, 2016.
  9. Time to remodel the journal impact factor. Nature, 535 (466), 2016.

Citations

This list of citations may not be complete; please contact us if citations are missing. There may be errors in the citation data due to automatic processing.

 Open Access 

A NoSQL-Based Framework for Managing Home Services

Marinette Bouet, Michel Schneider

Open Journal of Information Systems (OJIS), 3(1), Pages 1-28, 2016, Downloads: 11274, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194810 | GNL-LP: 113236115X | Meta-Data: tex xml rdf rss

Abstract: Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.

BibTex:

    @Article{OJIS_2016v3i1n02_Marinette,
        title     = {A NoSQL-Based Framework for Managing Home Services},
        author    = {Marinette Bouet and
                     Michel Schneider},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194810},
        urn       = {urn:nbn:de:101:1-201705194810},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.}
    }
0 citations in 2018

High-Dimensional Spatio-Temporal Indexing

Mathias Menninghaus, Martin Breunig, Elke Pulvermüller

Open Journal of Databases (OJDB), 3(1), Pages 1-20, 2016, Downloads: 10502

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194635 | GNL-LP: 1132360897 | Meta-Data: tex xml rdf rss

Abstract: There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions but are not able to store now - relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query-cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query-cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.

BibTex:

    @Article{OJDB_2016v3i1n01_Menninghaus,
        title     = {High-Dimensional Spatio-Temporal Indexing},
        author    = {Mathias Menninghaus and
                     Martin Breunig and
                     Elke Pulverm{\"u}ller},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194635},
        urn       = {urn:nbn:de:101:1-201705194635},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions but are not able to store now - relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query-cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query-cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.}
    }
0 citations in 2018

Criteria of Successful IT Projects from Management's Perspective

Mark Harwardt

Open Journal of Information Systems (OJIS), 3(1), Pages 29-54, 2016, Downloads: 19574, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194797 | GNL-LP: 1132361133 | Meta-Data: tex xml rdf rss

Abstract: The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategical criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.

BibTex:

    @Article{OJIS_2016v3i1n02_Harwardt,
        title     = {Criteria of Successful IT Projects from Management's Perspective},
        author    = {Mark Harwardt},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {29--54},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194797},
        urn       = {urn:nbn:de:101:1-201705194797},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategical criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.}
    }
1 citation in 2018:

IT Project Success from the Management Perspective - A Quantitative Evaluation

Mark Harwardt

Open Journal of Information Systems (OJIS), 5(1), Pages 24-52, 2018.

Definition and Categorization of Dew Computing

Yingwei Wang

Open Journal of Cloud Computing (OJCC), 3(1), Pages 1-7, 2016, Downloads: 13400, Citations: 69

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194546 | GNL-LP: 1132360781 | Meta-Data: tex xml rdf rss

Abstract: Dew computing is an emerging new research area and has great potentials in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.

BibTex:

    @Article{OJCC_2016v3i1n02_YingweiWang,
        title     = {Definition and Categorization of Dew Computing},
        author    = {Yingwei Wang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--7},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194546},
        urn       = {urn:nbn:de:101:1-201705194546},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Dew computing is an emerging new research area and has great potentials in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.}
    }
14 citations in 2018:

Emergent models, frameworks, and hardware technologies for Big data analytics

Sven Groppe

The Journal of Supercomputing, 2018.

Viability of Dew Computing for Multilayered Networks

David Fisher, Stefan Gloutnikov, Yaoyan Xi, Sadib Khan

2018.

A novel approach for Securely Processing Information on Dew Sites in cloud computing Environment

Bhavya Modi, Krunal Suthar, Jayesh Mevada

International Journal of Emerging Technologies and Innovative Research (JETIR), 5(2), 2018.

An Introduction to Dew Computing: Definition, Concept and Implications.

Partha Pratim Ray

IEEE Access, 6, Pages 723-737, 2018.

Overview of Cloudlet, Fog Computing, Edge Computing, and Dew Computing

Yi Pan, Parimala Thulasiraman, Yingwei Wang

In The 3rd International Workshop in Dew Computing (DEWCOM), Toronto, Canada, 2018.

Dewblock: A Blockchain System Based on Dew Computing

Yingwei Wang

In The 3rd International Workshop in Dew Computing (DEWCOM), Toronto, Canada, 2018.

Edge and Dew Computing for Streaming IoT

Marjan Gusev

In The 3rd International Workshop in Dew Computing (DEWCOM), Toronto, Canada, 2018.

Enhancing Usability of Cloud Storage Clients with Dew Computing

Tushar Mane, Himanshu Agrawal, Gurmeet Singh Gill

In The 3rd International Workshop in Dew Computing (DEWCOM), Toronto, Canada, 2018.

Post-cloud Computing Models: from Cloud to CDEF

Yingwei Wang

In report: dewcomputing.org, 2018.

Lightweight Dew Computing Paradigm to Manage Heterogeneous Wireless Sensor Networks with UAVs

Archana Rajakaruna, Ahsan Manzoor, Pawani Porambage, Madhusanka Liyanage, Mika Ylianttila, Andrei V. Gurtov

CoRR, abs/1811.04283, 2018.

Formal Description of Dew Computing

Marjan Gusev, Yingwei Wang

In The 3rd International Workshop in Dew Computing (DEWCOM), Toronto, Canada, 2018.

Data-Intensive Computing Paradigms for Big Data

Petra Loncar

Annals of DAAAM and Proceedings, 29, Pages 1010-1018, 2018.

Сетевая архитектура цифровой экономики [Network Architecture of the Digital Economy]

Natalya Arkadyevna Verzun, Mikhail Olegovich Kolbanev, Aleksandr Vladimirovich Omelyan

2018.

Post-cloud Computing and Its Varieties

Parimala Thulasiraman, Yingwei Wang

2018.

 Open Access 

Runtime Adaptive Hybrid Query Engine based on FPGAs

Stefan Werner, Dennis Heinrich, Sven Groppe, Christopher Blochwitz, Thilo Pionteck

Open Journal of Databases (OJDB), 3(1), Pages 21-41, 2016, Downloads: 13803, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194645 | GNL-LP: 1132360900 | Meta-Data: tex xml rdf rss

Abstract: This paper presents the fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible to cover a wide range of queries at system runtime. Therefore, we introduce a runtime reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently incorporates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as the widely known Billion Triples Challenge.

BibTex:

    @Article{OJDB_2016v3i1n02_Werner,
        title     = {Runtime Adaptive Hybrid Query Engine based on FPGAs},
        author    = {Stefan Werner and
                     Dennis Heinrich and
                     Sven Groppe and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {21--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194645},
        urn       = {urn:nbn:de:101:1-201705194645},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper presents the fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible to cover a wide range of queries at system runtime. Therefore, we introduce a runtime reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently incorporates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as the widely known Billion Triples Challenge.	}
    }
0 citations in 2018

 Open Access 

Query Processing in a P2P Network of Taxonomy-based Information Sources

Carlo Meghini, Anastasia Analyti

Open Journal of Web Technologies (OJWT), 3(1), Pages 1-25, 2016, Downloads: 6308

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291402 | GNL-LP: 1133021654 | Meta-Data: tex xml rdf rss

Abstract: In this study we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To the end of laying the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.

BibTex:

    @Article{OJWT_2016v3i1n02_Meghini,
        title     = {Query Processing in a P2P Network of Taxonomy-based Information Sources},
        author    = {Carlo Meghini and
                     Anastasia Analyti},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291402},
        urn       = {urn:nbn:de:101:1-201705291402},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this study we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To the end of laying the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.}
    }
0 citations in 2018

 Open Access 

A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes

Kazuhiro Yamaguchi, Mitumasa Saito, Takuya Akiyama, Tomohiro Kobayashi, Naoki Ginoza, Hideaki Matsue

Open Journal of Internet Of Things (OJIOT), 2(1), Pages 1-15, 2016, Downloads: 13684, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201704245003 | GNL-LP: 1130623858 | Meta-Data: tex xml rdf rss

Abstract: This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and uses a tunable FIR filtering in signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The results of simulation show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.

BibTex:

    @Article{OJIOT_2016v2i1n02_Yamaguchi,
        title     = {A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes},
        author    = {Kazuhiro Yamaguchi and
                     Mitumasa Saito and
                     Takuya Akiyama and
                     Tomohiro Kobayashi and
                     Naoki Ginoza and
                     Hideaki Matsue},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704245003},
        urn       = {urn:nbn:de:101:1-201704245003},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and uses a tunable FIR filtering in signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The results of simulation show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.}
    }
0 citations in 2018

 Open Access 

Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets

Rafael Peixoto, Thomas Hassan, Christophe Cruz, Aurélie Bertaux, Nuno Silva

Open Journal of Semantic Web (OJSW), 3(1), Pages 1-15, 2016, Downloads: 7702, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194907 | GNL-LP: 113236129X | Meta-Data: tex xml rdf rss

Abstract: Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items.The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps construct automatically a label hierarchy from statistical analysis of data. This paper focuses on the last two steps which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state of the art approaches in some areas.

BibTex:

    @Article{OJSW_2016v3i1n01_Peixoto,
        title     = {Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets},
        author    = {Rafael Peixoto and
                     Thomas Hassan and
                     Christophe Cruz and
                     Aur\'{e}lie Bertaux and
                     Nuno Silva},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194907},
        urn       = {urn:nbn:de:101:1-201705194907},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items.The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps construct automatically a label hierarchy from statistical analysis of data. This paper focuses on the last two steps which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state of the art approaches in some areas.}
    }
1 citation in 2018:

SHMC - Semantic Hierarchical Multi-label Classification Approche Big Data et Web Sémantique pour la classification automatique de données web et la recommandation d'articles économiques [A Big Data and Semantic Web approach for the automatic classification of web data and the recommendation of economics articles]

Christophe Cruz

In Congrès National de la Recherche des IUT (CNRIUT), Poster, Aix-en-Provence, 2018.

 Open Access 

A Semantic Question Answering Framework for Large Data Sets

Marta Tatu, Mithun Balakrishna, Steven Werner, Tatiana Erekhinskaya, Dan Moldovan

Open Journal of Semantic Web (OJSW), 3(1), Pages 16-31, 2016, Downloads: 13661, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194921 | GNL-LP: 1132361338 | Meta-Data: tex xml rdf rss

Abstract: Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.

BibTex:

    @Article{OJSW_2016v3i1n02_Tatu,
        title     = {A Semantic Question Answering Framework for Large Data Sets},
        author    = {Marta Tatu and
                     Mithun Balakrishna and
                     Steven Werner and
                     Tatiana Erekhinskaya and
                     Dan Moldovan},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {16--31},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194921},
        urn       = {urn:nbn:de:101:1-201705194921},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.}
    }
1 citation in 2018:

A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

Matthias Pfaff, Helmut Krcmar

Enterprise IS, 12(3), Pages 236-258, 2018.

 Open Access 

OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources

Marek Smid, Petr Kremen

Open Journal of Semantic Web (OJSW), 3(1), Pages 32-50, 2016, Downloads: 6216, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194936 | GNL-LP: 1132361346 | Meta-Data: tex xml rdf rss

Abstract: Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.

BibTex:

    @Article{OJSW_2016v3i1n03_Smid,
        title     = {OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources},
        author    = {Marek Smid and
                     Petr Kremen},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {32--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194936},
        urn       = {urn:nbn:de:101:1-201705194936},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.}
    }
0 citations in 2018

 Open Access 

Conformance of Social Media as Barometer of Public Engagement

Songchun Moon

Open Journal of Big Data (OJBD), 2(1), Pages 1-10, 2016, Downloads: 6346

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194393 | GNL-LP: 1132360560 | Meta-Data: tex xml rdf rss

Abstract: There have been continuously a number of expectations: Social media may play a role of indicator that shows the degree of engagement and preference of choices of users toward music or movies. However, finding appropriate software tools in the market to verify this sort of expectation is too costly and complicated in their natures, and this causes a number of difficulties to attempt technical experimentation. A convenient and easy tool to facilitate such experimentation was developed in this study and was used successfully for performing various measurements with regard to user engagement in music and movies.

BibTex:

    @Article{OJBD_2016v2i1n01_Moon,
        title     = {Conformance of Social Media as Barometer of Public Engagement},
        author    = {Songchun Moon},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {1--10},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194393},
        urn       = {urn:nbn:de:101:1-201705194393},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There have been continuously a number of expectations: Social media may play a role of indicator that shows the degree of engagement and preference of choices of users toward music or movies. However, finding appropriate software tools in the market to verify this sort of expectation is too costly and complicated in their natures, and this causes a number of difficulties to attempt technical experimentation. A convenient and easy tool to facilitate such experimentation was developed in this study and was used successfully for performing various measurements with regard to user engagement in music and movies.}
    }
0 citations in 2018

 Open Access 

XML-based Execution Plan Format (XEP)

Christoph Koch

Open Journal of Databases (OJDB), 3(1), Pages 42-52, 2016, Downloads: 6204

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194654 | GNL-LP: 1132360919 | Meta-Data: tex xml rdf rss

Abstract: Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to rethink among different plan formats. In addition, the analysis tools of execution plans only support single DBMSs, or they have to implement separate logic to handle each specific plan format of different DBMSs. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL.

BibTex:

    @Article{OJDB_2016v3i1n03_Koch,
        title     = {XML-based Execution Plan Format (XEP)},
        author    = {Christoph Koch},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {42--52},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194654},
        urn       = {urn:nbn:de:101:1-201705194654},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to rethink among different plan formats. In addition, the analysis tools of execution plans only support single DBMSs, or they have to implement separate logic to handle each specific plan format of different DBMSs. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL.}
    }
0 citations in 2018

 Open Access 

Doing More with the Dew: A New Approach to Cloud-Dew Architecture

David Edward Fisher, Shuhui Yang

Open Journal of Cloud Computing (OJCC), 3(1), Pages 8-19, 2016, Downloads: 10396, Citations: 9

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194535 | GNL-LP: 1132360773 | Meta-Data: tex xml rdf rss

Abstract: While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.

BibTex:

    @Article{OJCC_2016v3i1n02_Fisher,
        title     = {Doing More with the Dew: A New Approach to Cloud-Dew Architecture},
        author    = {David Edward Fisher and
                     Shuhui Yang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {8--19},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194535},
        urn       = {urn:nbn:de:101:1-201705194535},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.}
    }
3 citations in 2018:

Cloud-fog-dew architecture for personalized service-oriented systems

N. Axak, D. Rosinskiy, O. Barkovska, I. Novoseltsev

In 9th International Conference on Dependable Systems, Services and Technologies (DESSERT), Pages 78-82, 2018.

Emergent models, frameworks, and hardware technologies for Big data analytics

Sven Groppe

The Journal of Supercomputing, 2018.

A novel approach for Securely Processing Information on Dew Sites in cloud computing Environment

Bhavya Modi, Krunal Suthar, Jayesh Mevada

International Journal of Emerging Technologies and Innovative Research (JETIR), 5(2), 2018.

 Open Access 

Controlled Components for Internet of Things As-A-Service

Tatiana Aubonnet, Amina Boubendir, Frédéric Lemoine, Noëmie Simoni

Open Journal of Internet Of Things (OJIOT), 2(1), Pages 16-33, 2016, Downloads: 6495, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244995 | GNL-LP: 1130623629 | Meta-Data: tex xml rdf rss

Abstract: In order to facilitate developers willing to create future Internet of Things (IoT) services incorporating the nonfunctional aspects, we introduce an approach and an environment based on controlled components. Our approach allows developers to design an IoT "as-a-service", to build the service composition and to manage it. This is important, because the IoT allows us to observe and understand the real world in order to have decision-making information to act on reality. It is important to make sure that all these components work according to their mission, i.e. their Quality of Service (QoS) contract. Our environment provides the modeling, generates Architecture Description Language (ADL) formats, and uses them in the implementation phase on an open-source platform.

BibTex:

    @Article{OJIOT_2016v2i1n02_Aubonnet,
        title     = {Controlled Components for Internet of Things As-A-Service},
        author    = {Tatiana Aubonnet and
                     Amina Boubendir and
                     Fr\'{e}d\'{e}ric Lemoine and
                     No\"{e}mie Simoni},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {16--33},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244995},
        urn       = {urn:nbn:de:101:1-201704244995},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In order to facilitate developers willing to create future Internet of Things (IoT) services incorporating the nonfunctional aspects, we introduce an approach and an environment based on controlled components. Our approach allows developers to design an IoT "as-a-service", to build the service composition and to manage it. This is important, because the IoT allows us to observe and understand the real world in order to have decision-making information to act on reality. It is important to make sure that all these components work according to their mission, i.e. their Quality of Service (QoS) contract. Our environment provides the modeling, generates Architecture Description Language (ADL) formats, and uses them in the implementation phase on an open-source platform.}
    }
3 citations in 2018:

Middleware Support for Generic Actuation in the Internet of Mobile Things

Sheriton Valim, Matheus Zeitune, Bruno Olivieri, Markus Endler

Open Journal of Internet Of Things (OJIOT), 4(1), Pages 24-34, 2018. Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2018) in conjunction with the VLDB 2018 Conference in Rio de Janeiro, Brazil.

Flexibility and dynamicity for open network-as-a-service: From VNF and architecture modeling to deployment

Amina Boubendir, Emmanuel Bertin, Noemie Simoni

In Network Operations and Management Symposium (NOMS), Taipei, Taiwan, Pages 1-6, 2018.

Self-controlled Components for Human-machine Interaction Services

Frédéric Lemoine, Noëmie Simoni, Tatiana Aubonnet

In Proceedings of the 29th Conference on L'Interaction Homme-Machine (IHM), Poitiers, France, Pages 233-241, 2018.

 Open Access 

Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders

Sven Groppe, Dennis Heinrich, Christopher Blochwitz, Thilo Pionteck

Open Journal of Big Data (OJBD), 2(1), Pages 11-25, 2016, Downloads: 6397, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194418 | GNL-LP: 1132360587 | Meta-Data: tex xml rdf rss

Abstract: The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: The basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand, having 6 collation orders indexed, fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand, constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real-world data.
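As the abstract notes, an RDF triple has three components and hence 3! = 6 collation orders. A minimal in-memory sketch (a toy illustration with hypothetical data, not the paper's multi-core construction algorithm) shows how one sorted index per order arises:

```python
from itertools import permutations

def build_indices(triples):
    """Build one sorted index per collation order of an RDF triple.

    Three components (subject, predicate, object) yield 3! = 6 orders:
    SPO, SOP, PSO, POS, OSP, OPS. Each sorted index lets query
    processing feed fast merge joins with already-sorted input.
    """
    indices = {}
    for order in permutations(range(3)):        # the 6 permutations
        key = "".join("spo"[i] for i in order)  # e.g. "spo", "pos"
        indices[key] = sorted(triples,
                              key=lambda t: tuple(t[i] for i in order))
    return indices

# Hypothetical toy dataset
triples = [("alice", "knows", "bob"),
           ("bob", "age", "42"),
           ("alice", "age", "30")]
indices = build_indices(triples)
```

At the scale the paper targets (over a billion triples), such indices are constructed in parallel on multi-core hardware; the sketch only illustrates the six collation orders themselves.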

BibTex:

    @Article{OJBD_2016v2i1n02_Groppe,
        title     = {Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {11--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194418},
        urn       = {urn:nbn:de:101:1-201705194418},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: The basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand, having 6 collation orders indexed, fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand, constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real-world data.}
    }
0 citations in 2018

 Open Access 

New Areas of Contributions and New Addition of Security

Victor Chang

Open Journal of Big Data (OJBD), 2(1), Pages 26-28, 2016, Downloads: 4358

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194405 | GNL-LP: 1132360579 | Meta-Data: tex xml rdf rss

Abstract: Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will set up the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017), in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJBD_2016v2i1n03e_Chang,
        title     = {New Areas of Contributions and New Addition of Security},
        author    = {Victor Chang},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {26--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194405},
        urn       = {urn:nbn:de:101:1-201705194405},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will set up the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017), in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.}
    }
0 citations in 2018

 Open Access 

An NVM Aware MariaDB Database System and Associated IO Workload on File Systems

Jan Lindström, Dhananjoy Das, Nick Piggin, Santhosh Konundinya, Torben Mathiasen, Nisha Talagala, Dulcardo Arteaga

Open Journal of Databases (OJDB), 4(1), Pages 1-21, 2017, Downloads: 7194

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194662 | GNL-LP: 1132360927 | Meta-Data: tex xml rdf rss

Abstract: MariaDB is a community-developed fork of the MySQL relational database management system and was originally designed and implemented in order to use the traditional spinning disk architecture. With Non-Volatile memory (NVM) technology now in the forefront and main stream for server storage (Data centers), MariaDB addresses the need by adding support for NVM devices and introduces the NVM Compression method. NVM Compression is a novel hybrid technique that combines application level compression with flash awareness for optimal performance and storage efficiency. Utilizing new interface primitives exported by Flash Translation Layers (FTLs), we leverage the garbage collection available in flash devices to optimize the capacity management required by compression systems. We implement NVM Compression in the popular MariaDB database and use variants of commonly available POSIX file system interfaces to provide the extended FTL capabilities to the user space application. The experimental results show that the hybrid approach of NVM Compression can improve compression performance by 2-7x, deliver compression performance for flash devices that is within 5% of uncompressed performance, improve storage efficiency by 19% over legacy Row-Compression, reduce data writes by up to 4x when combined with other flash aware techniques such as Atomic Writes, and deliver further advantages in power efficiency and CPU utilization. Various micro benchmark measurements and findings on sparse files call for required improvement in file systems for handling of punch hole operations on files.

BibTex:

    @Article{OJDB_2017v4i1n01_Lindstroem,
        title     = {An NVM Aware MariaDB Database System and Associated IO Workload on File Systems},
        author    = {Jan Lindstr{\"o}m and
                     Dhananjoy Das and
                     Nick Piggin and
                     Santhosh Konundinya and
                     Torben Mathiasen and
                     Nisha Talagala and
                     Dulcardo Arteaga},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--21},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194662},
        urn       = {urn:nbn:de:101:1-201705194662},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {MariaDB is a community-developed fork of the MySQL relational database management system and was originally designed and implemented in order to use the traditional spinning disk architecture. With Non-Volatile memory (NVM) technology now in the forefront and main stream for server storage (Data centers), MariaDB addresses the need by adding support for NVM devices and introduces the NVM Compression method. NVM Compression is a novel hybrid technique that combines application level compression with flash awareness for optimal performance and storage efficiency. Utilizing new interface primitives exported by Flash Translation Layers (FTLs), we leverage the garbage collection available in flash devices to optimize the capacity management required by compression systems. We implement NVM Compression in the popular MariaDB database and use variants of commonly available POSIX file system interfaces to provide the extended FTL capabilities to the user space application. The experimental results show that the hybrid approach of NVM Compression can improve compression performance by 2-7x, deliver compression performance for flash devices that is within 5\% of uncompressed performance, improve storage efficiency by 19\% over legacy Row-Compression, reduce data writes by up to 4x when combined with other flash aware techniques such as Atomic Writes, and deliver further advantages in power efficiency and CPU utilization. Various micro benchmark measurements and findings on sparse files call for required improvement in file systems for handling of punch hole operations on files.}
    }
0 citations in 2018

 Open Access 

Assessing and Improving Domain Knowledge Representation in DBpedia

Ludovic Font, Amal Zouaq, Michel Gagnon

Open Journal of Semantic Web (OJSW), 4(1), Pages 1-19, 2017, Downloads: 7583

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194949 | GNL-LP: 1132361354 | Meta-Data: tex xml rdf rss

Abstract: With the development of knowledge graphs and the billions of triples generated on the Linked Data cloud, it is paramount to ensure the quality of data. In this work, we focus on one of the central hubs of the Linked Data cloud, DBpedia. In particular, we assess the quality of DBpedia for domain knowledge representation. Our results show that DBpedia has still much room for improvement in this regard, especially for the description of concepts and their linkage with the DBpedia ontology. Based on this analysis, we leverage open relation extraction and the information already available on DBpedia to partly correct the issue, by providing novel relations extracted from Wikipedia abstracts and discovering entity types using the dbo:type predicate. Our results show that open relation extraction can indeed help enrich domain knowledge representation in DBpedia.

BibTex:

    @Article{OJSW_2017v4i1n01_Font,
        title     = {Assessing and Improving Domain Knowledge Representation in DBpedia},
        author    = {Ludovic Font and
                     Amal Zouaq and
                     Michel Gagnon},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--19},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194949},
        urn       = {urn:nbn:de:101:1-201705194949},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the development of knowledge graphs and the billions of triples generated on the Linked Data cloud, it is paramount to ensure the quality of data. In this work, we focus on one of the central hubs of the Linked Data cloud, DBpedia. In particular, we assess the quality of DBpedia for domain knowledge representation. Our results show that DBpedia has still much room for improvement in this regard, especially for the description of concepts and their linkage with the DBpedia ontology. Based on this analysis, we leverage open relation extraction and the information already available on DBpedia to partly correct the issue, by providing novel relations extracted from Wikipedia abstracts and discovering entity types using the dbo:type predicate. Our results show that open relation extraction can indeed help enrich domain knowledge representation in DBpedia.}
    }
0 citations in 2018

 Open Access 

A Classification Framework for Beacon Applications

Gottfried Vossen, Stuart Dillon, Fabian Schomm, Florian Stahl

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 1-11, 2017, Downloads: 5250, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201704245012 | GNL-LP: 1130624145 | Meta-Data: tex xml rdf rss

Abstract: Beacons have received considerable attention in recent years, which is partially due to the fact that they serve as a flexible and versatile replacement for RFIDs in many applications. However, beacons are mostly considered from a purely technical perspective. This paper provides a conceptual view on application scenarios for beacons and introduces a novel framework for characterizing these. The framework consists of four dimensions: device movement, action trigger, purpose type, and connectivity requirements. Based on these, three archetypical scenarios are described. Finally, event-condition-action rules and online algorithms are used to formalize the backend of a beacon architecture.
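The event-condition-action rules mentioned for the beacon backend can be sketched roughly as follows (a hedged illustration; the rule structure, event names, and RSSI threshold are hypothetical, not taken from the paper):

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A single event-condition-action (ECA) rule."""
    event: str                          # event type to react to
    condition: Callable[[dict], bool]   # guard evaluated on the event context
    action: Callable[[dict], None]      # effect executed if the guard holds

def dispatch(rules, event_type, ctx):
    """Fire every rule whose event matches and whose condition holds."""
    for rule in rules:
        if rule.event == event_type and rule.condition(ctx):
            rule.action(ctx)

# Hypothetical beacon scenario: greet only users entering near range.
log = []
rules = [Rule("beacon_enter",
              lambda ctx: ctx["rssi"] > -70,                 # near-range contacts only
              lambda ctx: log.append("welcome " + ctx["user"]))]
dispatch(rules, "beacon_enter", {"user": "u1", "rssi": -60})
dispatch(rules, "beacon_enter", {"user": "u2", "rssi": -90})  # too far: no action fires
```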

BibTex:

    @Article{OJIOT_2017v3i1n01_Vossen,
        title     = {A Classification Framework for Beacon Applications},
        author    = {Gottfried Vossen and
                     Stuart Dillon and
                     Fabian Schomm and
                     Florian Stahl},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {1--11},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704245012},
        urn       = {urn:nbn:de:101:1-201704245012},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Beacons have received considerable attention in recent years, which is partially due to the fact that they serve as a flexible and versatile replacement for RFIDs in many applications. However, beacons are mostly considered from a purely technical perspective. This paper provides a conceptual view on application scenarios for beacons and introduces a novel framework for characterizing these. The framework consists of four dimensions: device movement, action trigger, purpose type, and connectivity requirements. Based on these, three archetypical scenarios are described. Finally, event-condition-action rules and online algorithms are used to formalize the backend of a beacon architecture.}
    }
0 citations in 2018

 Open Access 

The mf-index: A Citation-Based Multiple Factor Index to Evaluate and Compare the Output of Scientists

Eric Oberesch, Sven Groppe

Open Journal of Web Technologies (OJWT), 4(1), Pages 1-32, 2017, Downloads: 4784, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-2017070914565 | GNL-LP: 1136555501 | Meta-Data: tex xml rdf rss

Abstract: Comparing the output of scientists as objectively as possible is an important factor for, e.g., the approval of research funds or the filling of open positions at universities. Numeric indices, which express the scientific output in the form of a concrete value, may not completely supersede an overall view of a researcher, but provide helpful indications for the assessment. This work introduces the most important citation-based indices, analyzes their advantages and disadvantages and provides an overview of the aspects considered by them. On this basis, we identify the criteria that an advanced index should fulfill, and develop a new index, the mf-index. The objective of the mf-index is to combine the benefits of the existing indices, while avoiding as far as possible their drawbacks and to consider additional aspects. Finally, an evaluation based on data of real publications and citations compares the mf-index with existing indices and verifies that its advantages in theory can also be determined in practice.
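The mf-index formula itself is not given in the abstract, but the family of citation-based indices it builds on can be illustrated with the best-known member, the h-index (standard definition, not this paper's contribution):

```python
def h_index(citations):
    """h-index: the largest h such that h papers have >= h citations each."""
    ranked = sorted(citations, reverse=True)   # most-cited papers first
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank        # the paper at this rank still has enough citations
        else:
            break
    return h
```

For example, citation counts [10, 8, 5, 4, 3] give h = 4, since four papers have at least four citations each; this is one of the aspects a multiple-factor index like the mf-index combines with others.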

BibTex:

    @Article{OJWT_2017v4i1n01_Oberesch,
        title     = {The mf-index: A Citation-Based Multiple Factor Index to Evaluate and Compare the Output of Scientists},
        author    = {Eric Oberesch and
                     Sven Groppe},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--32},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017070914565},
        urn       = {urn:nbn:de:101:1-2017070914565},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Comparing the output of scientists as objectively as possible is an important factor for, e.g., the approval of research funds or the filling of open positions at universities. Numeric indices, which express the scientific output in the form of a concrete value, may not completely supersede an overall view of a researcher, but provide helpful indications for the assessment. This work introduces the most important citation-based indices, analyzes their advantages and disadvantages and provides an overview of the aspects considered by them. On this basis, we identify the criteria that an advanced index should fulfill, and develop a new index, the mf-index. The objective of the mf-index is to combine the benefits of the existing indices, while avoiding as far as possible their drawbacks and to consider additional aspects. Finally, an evaluation based on data of real publications and citations compares the mf-index with existing indices and verifies that its advantages in theory can also be determined in practice.}
    }
1 citation in 2018:

Efficiency vs Effectiveness: Alternative Metrics for Research Performance

Anatoliy G. Goncharuk

Journal of Applied Management and Investments, 7(1), Pages 24-37, 2018.

 Open Access 

Mitigating Radio Interference in Large IoT Networks through Dynamic CCA Adjustment

Tommy Sparber, Carlo Alberto Boano, Salil S. Kanhere, Kay Römer

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 103-113, 2017, Downloads: 7642, Citations: 11

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613511 | GNL-LP: 113782025X | Meta-Data: tex xml rdf rss

Abstract: The performance of low-power wireless sensor networks used to build Internet of Things applications often suffers from radio interference generated by co-located wireless devices or from jammers maliciously placed in their proximity. As IoT devices typically operate in unsupervised large-scale installations, and as radio interference is typically localized and hence affects only a portion of the nodes in the network, it is important to give low-power wireless sensors and actuators the ability to autonomously mitigate the impact of surrounding interference. In this paper we present our approach DynCCA, which dynamically adapts the clear channel assessment threshold of IoT devices to minimize the impact of malicious or unintentional interference on both network reliability and energy efficiency. First, we describe how varying the clear channel assessment threshold at run-time using only information computed locally can help to minimize the impact of unintentional interference from surrounding devices and to escape jamming attacks. We then present the design and implementation of DynCCA on top of ContikiMAC and evaluate its performance on wireless sensor nodes equipped with IEEE 802.15.4 radios. Our experimental investigation shows that the use of DynCCA in dense IoT networks can increase the packet reception rate by up to 50% and reduce the energy consumption by a factor of 4.
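The core idea of adapting the clear channel assessment threshold at run-time can be sketched as a small control loop (a hedged illustration; the step size, margin, and dBm bounds are hypothetical, not DynCCA's actual parameters):

```python
def adjust_cca(threshold, noise_floor_dbm, step=1, margin=3,
               min_thr=-90, max_thr=-45):
    """One adaptation step for a CCA (clear channel assessment) threshold.

    If the locally measured noise floor approaches the threshold, the
    radio would deem the channel permanently busy, so the threshold is
    raised to escape the interference; otherwise it is lowered again to
    stay sensitive to genuine transmissions. All values in dBm.
    """
    if noise_floor_dbm > threshold - margin:
        threshold += step    # interference detected: become less sensitive
    else:
        threshold -= step    # channel quiet: regain sensitivity
    return max(min_thr, min(max_thr, threshold))

# Hypothetical trace: quiet channel, then interference, then quiet again.
thr = -80
for noise in [-95, -95, -78, -78, -78, -95]:
    thr = adjust_cca(thr, noise)
```

Using only locally computed information (here, the noise floor) is what lets each node adapt autonomously in unsupervised large-scale installations.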

BibTex:

    @Article{OJIOT_2017v3i1n09_Sparber,
        title     = {Mitigating Radio Interference in Large IoT Networks through Dynamic CCA Adjustment},
        author    = {Tommy Sparber and
                     Carlo Alberto Boano and
                     Salil S. Kanhere and
                     Kay R{\"o}mer},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {103--113},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613511},
        urn       = {urn:nbn:de:101:1-2017080613511},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The performance of low-power wireless sensor networks used to build Internet of Things applications often suffers from radio interference generated by co-located wireless devices or from jammers maliciously placed in their proximity. As IoT devices typically operate in unsupervised large-scale installations, and as radio interference is typically localized and hence affects only a portion of the nodes in the network, it is important to give low-power wireless sensors and actuators the ability to autonomously mitigate the impact of surrounding interference. In this paper we present our approach DynCCA, which dynamically adapts the clear channel assessment threshold of IoT devices to minimize the impact of malicious or unintentional interference on both network reliability and energy efficiency. First, we describe how varying the clear channel assessment threshold at run-time using only information computed locally can help to minimize the impact of unintentional interference from surrounding devices and to escape jamming attacks. We then present the design and implementation of DynCCA on top of ContikiMAC and evaluate its performance on wireless sensor nodes equipped with IEEE 802.15.4 radios. Our experimental investigation shows that the use of DynCCA in dense IoT networks can increase the packet reception rate by up to 50\% and reduce the energy consumption by a factor of 4.}
    }
4 citations in 2018:

Synchronous transmissions + channel sampling = energy efficient event-triggered wireless sensing systems

C. Rojas, J. Decotignie

In 14th International Workshop on Factory Communication Systems (WFCS), Pages 1-10, 2018.

Environment, People, and Time as Factors in the Internet of Things Technical Revolution

Jan Sliwa

In Internet of Things A to Z: Technologies and Applications, Pages 51-76, 2018.

Symbol-Level Cross-Technology Communication via Payload Encoding

Shuai Wang, Song Min Kim, Tian He

In 38th IEEE International Conference on Distributed Computing Systems (ICDCS), Vienna, Austria, Pages 500-510, 2018.

Internet of Things (IoT) Based Home Automation: A Review

Nidhi Singh, Ankita Sharma, Anurag Dwivedi, Nitesh Tiwari

i-Manager's Journal on Digital Signal Processing, 6(4), Pages 34, 2018.

 Open Access 

Sensing as a Service: Secure Wireless Sensor Network Infrastructure Sharing for the Internet of Things

Cintia B. Margi, Renan C. A. Alves, Johanna Sepulveda

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 91-102, 2017, Downloads: 6356, Citations: 9

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613467 | GNL-LP: 1137820209 | Meta-Data: tex xml rdf rss

Abstract: Internet of Things (IoT) and Wireless Sensor Networks (WSN) are composed of devices capable of sensing/actuation, communication and processing. They are valuable technology for the development of applications in several areas, such as environmental, industrial and urban monitoring and processes controlling. Given the challenges of different protocols and technologies used for communication, resource constrained devices nature, high connectivity and security requirements for the applications, the main challenges that need to be addressed include: secure communication between IoT devices, network resource management and the protected implementation of the security mechanisms. In this paper, we present a secure Software-Defined Networking (SDN) based framework that includes: communication protocols, node task programming middleware, communication and computation resource management features and security services. The communication layer for the constrained devices considers IT-SDN as its basis. Concerning security, we address the main services, the type of algorithms to achieve them, and why their secure implementation is needed. Lastly, we showcase how the Sensing as a Service paradigm could enable WSN usage in more environments.

BibTex:

    @Article{OJIOT_2017v3i1n08_Margi,
        title     = {Sensing as a Service: Secure Wireless Sensor Network Infrastructure Sharing for the Internet of Things},
        author    = {Cintia B. Margi and
                     Renan C. A. Alves and
                     Johanna Sepulveda},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {91--102},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613467},
        urn       = {urn:nbn:de:101:1-2017080613467},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Internet of Things (IoT) and Wireless Sensor Networks (WSN) are composed of devices capable of sensing/actuation, communication and processing. They are valuable technology for the development of applications in several areas, such as environmental, industrial and urban monitoring and processes controlling. Given the challenges of different protocols and technologies used for communication, resource constrained devices nature, high connectivity and security requirements for the applications, the main challenges that need to be addressed include: secure communication between IoT devices, network resource management and the protected implementation of the security mechanisms. In this paper, we present a secure Software-Defined Networking (SDN) based framework that includes: communication protocols, node task programming middleware, communication and computation resource management features and security services. The communication layer for the constrained devices considers IT-SDN as its basis. Concerning security, we address the main services, the type of algorithms to achieve them, and why their secure implementation is needed. Lastly, we showcase how the Sensing as a Service paradigm could enable WSN usage in more environments.}
    }
3 citations in 2018:

Sensing-as-a-Service Decentralized Data Access Control Mechanism for Cyber Physical Systems

Pavan Kumar C., Amjad Gawanmeh, Selvakumar R.

In International Conference on Communications Workshops (ICC), Workshops, Kansas City, MO, USA, Pages 1-5, 2018.

An adaptive energy aware strategy based on game theory to add privacy in the physical layer for cognitive WSNs

Elena Romero, Javier Blesa, Alvaro Araujo

Ad Hoc Networks, 2018.

WS3N: Wireless Secure SDN-Based Communication for Sensor Networks

Renan C. A. Alves, Doriedson A. G. Oliveira, Geovandro C. C. F. Pereira, Bruno C. Albertini, Cintia B. Margi

Security and Communication Networks, 2018.

 Open Access 

Multi-Layer Cross Domain Reasoning over Distributed Autonomous IoT Applications

Muhammad Intizar Ali, Pankesh Patel, Soumya Kanti Datta, Amelie Gyrard

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 75-90, 2017, Downloads: 8681, Citations: 5

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613451 | GNL-LP: 1137820195 | Meta-Data: tex xml rdf rss

Abstract: Due to the rapid advancements in the sensor technologies and IoT, we are witnessing a rapid growth in the use of sensors and relevant IoT applications. A very large number of sensors and IoT devices are in place in our surroundings which keep sensing dynamic contextual information. A true potential of the wide-spread of IoT devices can only be realized by designing and deploying a large number of smart IoT applications which can provide insights on the data collected from IoT devices and support decision making by converting raw sensor data into actionable knowledge. However, the process of getting value from sensor data streams and converting these raw sensor values into actionable knowledge requires extensive efforts from IoT application developers and domain experts. In this paper, our main aim is to propose a multi-layer cross domain reasoning framework, which can support application developers, end-users and domain experts to automatically understand relevant events and extract actionable knowledge with minimal efforts. Our framework reduces the efforts required for IoT applications development (i) by supporting automated application code generation and access mechanisms using IoTSuite, (ii) by leveraging the Machine-to-Machine Measurement (M3) framework to exploit semantic technologies and domain knowledge, and (iii) by using automated sensor discovery and complex event processing of relevant events (ACEIS Middleware) at the multiple data processing layers and different stages of the IoT application development life cycle. In essence, our framework supports the end-users and IoT application developers to design innovative IoT applications by reducing the programming efforts, by identifying relevant events and by suggesting potential actions based on complex event processing and reasoning for cross-domain IoT applications.

BibTex:

    @Article{OJIOT_2017v3i1n07_Ali,
        title     = {Multi-Layer Cross Domain Reasoning over Distributed Autonomous IoT Applications},
        author    = {Muhammad Intizar Ali and
                     Pankesh Patel and
                     Soumya Kanti Datta and
                     Amelie Gyrard},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {75--90},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613451},
        urn       = {urn:nbn:de:101:1-2017080613451},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Due to the rapid advancements in the sensor technologies and IoT, we are witnessing a rapid growth in the use of sensors and relevant IoT applications. A very large number of sensors and IoT devices are in place in our surroundings which keep sensing dynamic contextual information. A true potential of the wide-spread of IoT devices can only be realized by designing and deploying a large number of smart IoT applications which can provide insights on the data collected from IoT devices and support decision making by converting raw sensor data into actionable knowledge. However, the process of getting value from sensor data streams and converting these raw sensor values into actionable knowledge requires extensive efforts from IoT application developers and domain experts. In this paper, our main aim is to propose a multi-layer cross domain reasoning framework, which can support application developers, end-users and domain experts to automatically understand relevant events and extract actionable knowledge with minimal efforts. Our framework reduces the efforts required for IoT applications development (i) by supporting automated application code generation and access mechanisms using IoTSuite, (ii) by leveraging from Machine-to-Machine Measurement (M3) framework to exploit semantic technologies and domain knowledge, and (iii) by using automated sensor discovery and complex event processing of relevant events (ACEIS Middleware) at the multiple data processing layers and different stages of the IoT application development life cycle. In the essence, our framework supports the end-users and IoT application developers to design innovative IoT applications by reducing the programming efforts, by identifying relevant events and by suggesting potential actions based on complex event processing and reasoning for cross-domain IoT applications.}
    }
1 citation in 2018:

Developing and Integrating a Semantic Interoperability Testing Tool in F-Interop Platform

S. K. Datta, C. Bonnet, H. Baqa, M. Zhao, F. Le-Gall

In IEEE Region Ten Symposium (Tensymp), Pages 112-117, 2018.

 Open Access 

Rewriting Complex Queries from Cloud to Fog under Capability Constraints to Protect the Users' Privacy

Hannes Grunert, Andreas Heuer

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 31-45, 2017, Downloads: 5019, Citations: 4

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613421 | GNL-LP: 1137820160 | Meta-Data: tex xml rdf rss

Abstract: In this paper we show how existing query rewriting and query containment techniques can be used to achieve an efficient and privacy-aware processing of queries. To achieve this, the whole network structure, from data producing sensors up to cloud computers, is utilized to create a database machine consisting of billions of devices from the Internet of Things. Based on previous research in the field of database theory, especially query rewriting, we present a concept to split a query into fragment and remainder queries. Fragment queries can operate on resource limited devices to filter and preaggregate data. Remainder queries take these data and execute the last, complex part of the original queries on more powerful devices. As a result, less data is processed and forwarded in the network and the privacy principle of data minimization is accomplished.
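The fragment/remainder query split described in the abstract can be sketched roughly as follows. This is an illustrative toy example (an average-above-threshold query with hypothetical function names), not the paper's actual rewriting algorithm:

```python
# Fragment query: runs on the resource-limited sensor/edge device.
# It filters locally and ships only a partial aggregate, so raw
# readings never leave the device (data minimization).
def fragment(readings, threshold):
    vals = [r for r in readings if r > threshold]  # filter
    return (sum(vals), len(vals))                  # preaggregate

# Remainder query: runs on a more powerful device (fog/cloud) and
# combines the partial aggregates into the final result.
def remainder(partials):
    total = sum(s for s, _ in partials)
    count = sum(n for _, n in partials)
    return total / count if count else None
```

Each device answers the fragment; the cloud only ever sees `(sum, count)` pairs, which is the privacy benefit the abstract claims.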

BibTex:

    @Article{OJIOT_2017v3i1n04_Grunert,
        title     = {Rewriting Complex Queries from Cloud to Fog under Capability Constraints to Protect the Users' Privacy},
        author    = {Hannes Grunert and
                     Andreas Heuer},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {31--45},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613421},
        urn       = {urn:nbn:de:101:1-2017080613421},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this paper we show how existing query rewriting and query containment techniques can be used to achieve an efficient and privacy-aware processing of queries. To achieve this, the whole network structure, from data producing sensors up to cloud computers, is utilized to create a database machine consisting of billions of devices from the Internet of Things. Based on previous research in the field of database theory, especially query rewriting, we present a concept to split a query into fragment and remainder queries. Fragment queries can operate on resource limited devices to filter and preaggregate data. Remainder queries take these data and execute the last, complex part of the original queries on more powerful devices. As a result, less data is processed and forwarded in the network and the privacy principle of data minimization is accomplished.}
    }
0 citations in 2018

 Open Access 

Semantic Blockchain to Improve Scalability in the Internet of Things

Michele Ruta, Floriano Scioscia, Saverio Ieva, Giovanna Capurso, Eugenio Di Sciascio

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 46-61, 2017, Downloads: 13750, Citations: 48

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613488 | GNL-LP: 1137820225 | Meta-Data: tex xml rdf rss

Abstract: Generally scarce computational and memory resource availability is a well known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts in overcoming unpredictability (particularly in case of large dimensions) are the ones integrating Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of allowed advanced discovery features, transactions in the SWoT still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains a consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. Such operations are implemented as smart contracts, allowing distributed execution and trust. Reported experiments early assess the sustainability of the proposal.
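A semantic registration/discovery layer over a blockchain, as the abstract outlines, can be caricatured in a few lines. This is a deliberately simplified sketch (an append-only hash-chained list with concept-overlap matching; all names are hypothetical), not the paper's smart-contract implementation:

```python
import hashlib
import json

class SemanticRegistryChain:
    """Toy append-only ledger of semantic service descriptions."""

    def __init__(self):
        self.blocks = []

    def register(self, service_id, concepts):
        # Each record is chained to the previous one via its hash,
        # mimicking the tamper-evidence a blockchain provides.
        prev = self.blocks[-1]["hash"] if self.blocks else "0"
        record = {"service": service_id, "concepts": sorted(concepts), "prev": prev}
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.blocks.append(record)

    def discover(self, request):
        # Rank registered services by overlap with the requested concepts
        # (a stand-in for real semantic matchmaking over annotations).
        scored = [(len(set(b["concepts"]) & set(request)), b["service"])
                  for b in self.blocks]
        return [s for score, s in sorted(scored, reverse=True) if score > 0]
```

In the paper, registration, discovery, selection and payment are smart contracts executed and validated by the network; here the chain is local and only illustrates the data layout.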

BibTex:

    @Article{OJIOT_2017v3i1n05_Ruta,
        title     = {Semantic Blockchain to Improve Scalability in the Internet of Things},
        author    = {Michele Ruta and
                     Floriano Scioscia and
                     Saverio Ieva and
                     Giovanna Capurso and
                     Eugenio Di Sciascio},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {46--61},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613488},
        urn       = {urn:nbn:de:101:1-2017080613488},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Generally scarce computational and memory resource availability is a well known problem for the IoT, whose intrinsic volatility makes complex applications unfeasible. Noteworthy efforts in overcoming unpredictability (particularly in case of large dimensions) are the ones integrating Knowledge Representation technologies to build the so-called Semantic Web of Things (SWoT). In spite of allowed advanced discovery features, transactions in the SWoT still suffer from not viable trust management strategies. Given its intrinsic characteristics, blockchain technology appears as interesting from this perspective: a semantic resource/service discovery layer built upon a basic blockchain infrastructure gains a consensus validation. This paper proposes a novel Service-Oriented Architecture (SOA) based on a semantic blockchain for registration, discovery, selection and payment. Such operations are implemented as smart contracts, allowing distributed execution and trust. Reported experiments early assess the sustainability of the proposal.}
    }
7 citations in 2018:

Semantic-enhanced blockchain technology for smart cities and communities

David Rull Aixa

2018. Universitat Oberta de Catalunya

Emergent models, frameworks, and hardware technologies for Big data analytics

Sven Groppe

The Journal of Supercomputing, 2018.

Bubbles of Trust: A decentralized blockchain-based authentication system for IoT

Mohamed Tahar Hammi, Badis Hammi, Patrick Bellot, Ahmed Serhrouchni

Computers & Security, 78, Pages 126 - 142, 2018.

Blockchain and IoT Integration: A Systematic Survey

Alfonso Panarello, Nachiket Tapas, Giovanni Merlino, Francesco Longo, Antonio Puliafito

Sensors, 18(8), 2018.

Digital-Information Tracking Framework Using Blockchain

Ankur Arora, Monika Arora

Journal of Supply Chain Management Systems, 7(2), Pages 1-7, 2018.

Consortium Blockchain-Based SIFT: Outsourcing Encrypted Feature Extraction in the D2D Network

Xiaoqin Feng, Jianfeng Ma, Tao Feng, Yinbin Miao, Ximeng Liu

IEEE Access, 6, Pages 52248-52260, 2018.

Analysis and study of data security in the Internet of Things paradigm from a Blockchain technology approach

David Rull Aixa

2018. Màster Universitari en Enginyeria de Telecomunicació UOC-URL

 Open Access 

Differentially Private Linear Models for Gossip Learning through Data Perturbation

István Hegedus, Márk Jelasity

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 62-74, 2017, Downloads: 4147, Citations: 1

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613445 | GNL-LP: 1137820187 | Meta-Data: tex xml rdf rss | Show/Hide Abstract | Show/Hide BibTex

Abstract: Privacy is a key concern in many distributed systems that are rich in personal data such as networks of smart meters or smartphones. Decentralizing the processing of personal data in such systems is a promising first step towards achieving privacy through avoiding the collection of data altogether. However, decentralization in itself is not enough: Additional guarantees such as differential privacy are highly desirable. Here, we focus on stochastic gradient descent (SGD), a popular approach to implement distributed learning. Our goal is to design differentially private variants of SGD to be applied in gossip learning, a decentralized learning framework. Known approaches that are suitable for our scenario focus on protecting the gradient that is being computed in each iteration of SGD. This has the drawback that each data point can be accessed only a small number of times. We propose a solution in which we effectively publish the entire database in a differentially private way so that linear learners could be run that are allowed to access any (perturbed) data point any number of times. This flexibility is very useful when using the method in combination with distributed learning environments. We show empirically that the performance of the obtained model is comparable to that of previous gradient-based approaches and it is even superior in certain scenarios.
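The "publish the entire database in a differentially private way" idea can be sketched with the standard Laplace mechanism: perturb the data once, then run any linear learner on the perturbed copy as often as desired. This is a generic illustration under assumed parameters (per-cell sensitivity, ordinary least squares), not the authors' exact scheme:

```python
import numpy as np

def perturb(X, epsilon, sensitivity=1.0, rng=None):
    """Publish X once with Laplace noise of scale sensitivity/epsilon per cell."""
    rng = rng or np.random.default_rng(0)
    return X + rng.laplace(0.0, sensitivity / epsilon, size=X.shape)

def linear_fit(X, y):
    """Ordinary least squares on the (perturbed) data.

    Because the noise was added to the data, not the gradients, this
    learner may access every perturbed point any number of times.
    """
    A = np.c_[X, np.ones(len(X))]  # add intercept column
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef
```

Contrast with gradient perturbation: there, each data point supports only a bounded number of noisy gradient queries, which is the drawback the abstract points out.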

BibTex:

    @Article{OJIOT_2017v3i1n06_Hegedus,
        title     = {Differentially Private Linear Models for Gossip Learning through Data Perturbation},
        author    = {Istv\'{a}n Hegedus and
                     M\'{a}rk Jelasity},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {62--74},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613445},
        urn       = {urn:nbn:de:101:1-2017080613445},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Privacy is a key concern in many distributed systems that are rich in personal data such as networks of smart meters or smartphones. Decentralizing the processing of personal data in such systems is a promising first step towards achieving privacy through avoiding the collection of data altogether. However, decentralization in itself is not enough: Additional guarantees such as differential privacy are highly desirable. Here, we focus on stochastic gradient descent (SGD), a popular approach to implement distributed learning. Our goal is to design differentially private variants of SGD to be applied in gossip learning, a decentralized learning framework. Known approaches that are suitable for our scenario focus on protecting the gradient that is being computed in each iteration of SGD. This has the drawback that each data point can be accessed only a small number of times. We propose a solution in which we effectively publish the entire database in a differentially private way so that linear learners could be run that are allowed to access any (perturbed) data point any number of times. This flexibility is very useful when using the method in combination with distributed learning environments. We show empirically that the performance of the obtained model is comparable to that of previous gradient-based approaches and it is even superior in certain scenarios.}
    }
0 citations in 2018

 Open Access 

Latency Optimization in Large-Scale Cloud-Sensor Systems

Adhithya Balasubramanian, Sumi Helal, Yi Xu

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 18-30, 2017, Downloads: 4116, Citations: 1

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613410 | GNL-LP: 1137820152 | Meta-Data: tex xml rdf rss

Abstract: With the advent of the Internet of Things and smart city applications, massive cyber-physical interactions between the applications hosted in the cloud and a huge number of external physical sensors and devices is an inevitable situation. This raises two main challenges: cloud cost affordability as the smart city grows (referred to as economical cloud scalability) and the energy-efficient operation of sensor hardware. We have developed Cloud-Edge-Beneath (CEB), a multi-tier architecture for large-scale IoT deployments, embodying distributed optimizations, which address these two major challenges. In this article, we summarize our prior work on CEB to set context for presenting a third major challenge for cloud sensor-systems, which is latency. Prolonged latency can potentially arise in servicing requests from cloud applications, especially given our primary focus on optimizing energy and cloud scalability. Latency, however, is an important factor to optimize for real-time and cyber-physical applications with limited tolerance to delays. Also, improving the responsiveness of any IoT application is bound to improve the user experience and hence the acceptability and adoption of smart city solutions by the city citizens. In this article, we aim to give a formal definition and formulation for the latency optimization problem under CEB. We propose a Prioritized Application Fragment Caching Algorithm (PAFCA) to selectively cache application fragments from the cloud to lower layers of CEB, as a key measure to optimize latency. The algorithm itself is an extension of one of the existing optimization algorithms of CEB (AFCA-1). As will be shown, PAFCA takes into account the expectations of cloud applications on real-timeliness of responses. Through experiments, we measure and validate the effect of PAFCA on latency and cloud scalability. We also introduce and discuss the trade-off between latency and sensor energy in this given context.
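The prioritized caching idea behind PAFCA can be illustrated with a minimal capacity-bounded cache whose admission priority weights access frequency by a real-timeliness factor. This is a hypothetical simplification (names, the priority formula, and the eviction rule are assumptions for illustration, not the published algorithm):

```python
class FragmentCache:
    """Toy priority cache for application fragments pushed below the cloud."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}  # fragment id -> admission priority

    @staticmethod
    def priority(freq, rt_weight):
        # Frequently requested fragments with tight latency expectations
        # are the most valuable to cache near the sensors.
        return freq * rt_weight

    def admit(self, frag_id, freq, rt_weight):
        p = self.priority(freq, rt_weight)
        if len(self.store) < self.capacity:
            self.store[frag_id] = p
            return True
        victim = min(self.store, key=self.store.get)
        if p > self.store[victim]:  # evict only for a higher-priority fragment
            del self.store[victim]
            self.store[frag_id] = p
            return True
        return False
```

Caching a fragment lower in the hierarchy cuts round-trip latency but keeps sensor-side hardware busier, which is the latency/energy trade-off the article discusses.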

BibTex:

    @Article{OJIOT_2017v3i1n03_Balasubramanian,
        title     = {Latency Optimization in Large-Scale Cloud-Sensor Systems},
        author    = {Adhithya Balasubramanian and
                     Sumi Helal and
                     Yi Xu},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {18--30},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613410},
        urn       = {urn:nbn:de:101:1-2017080613410},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the advent of the Internet of Things and smart city applications, massive cyber-physical interactions between the applications hosted in the cloud and a huge number of external physical sensors and devices is an inevitable situation. This raises two main challenges: cloud cost affordability as the smart city grows (referred to as economical cloud scalability) and the energy-efficient operation of sensor hardware. We have developed Cloud-Edge-Beneath (CEB), a multi-tier architecture for large-scale IoT deployments, embodying distributed optimizations, which address these two major challenges. In this article, we summarize our prior work on CEB to set context for presenting a third major challenge for cloud sensor-systems, which is latency. Prolonged latency can potentially arise in servicing requests from cloud applications, especially given our primary focus on optimizing energy and cloud scalability. Latency, however, is an important factor to optimize for real-time and cyber-physical applications with limited tolerance to delays. Also, improving the responsiveness of any IoT application is bound to improve the user experience and hence the acceptability and adoption of smart city solutions by the city citizens. In this article, we aim to give a formal definition and formulation for the latency optimization problem under CEB. We propose a Prioritized Application Fragment Caching Algorithm (PAFCA) to selectively cache application fragments from the cloud to lower layers of CEB, as a key measure to optimize latency. The algorithm itself is an extension of one of the existing optimization algorithms of CEB (AFCA-1). As will be shown, PAFCA takes into account the expectations of cloud applications on real-timeliness of responses. Through experiments, we measure and validate the effect of PAFCA on latency and cloud scalability. We also introduce and discuss the trade-off between latency and sensor energy in this given context.}
    }
0 citations in 2018

 Open Access 

Data Credence in IoT: Vision and Challenges

Vladimir I. Zadorozhny, Prashant Krishnamurthy, Mai Abdelhakim, Konstantinos Pelechrinis, Jiawei Xu

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 114-126, 2017, Downloads: 5627, Citations: 1

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613498 | GNL-LP: 1137820233 | Meta-Data: tex xml rdf rss

Abstract: As the Internet of Things permeates every aspect of human life, assessing the credence or integrity of the data generated by "things" becomes a central exercise for making decisions or in auditing events. In this paper, we present a vision of this exercise that includes the notion of data credence, assessing data credence in an efficient manner, and the use of technologies that are on the horizon for the very large scale Internet of Things.

BibTex:

    @Article{OJIOT_2017v3i1n10_Zadorozhny,
        title     = {Data Credence in IoT: Vision and Challenges},
        author    = {Vladimir I. Zadorozhny and
                     Prashant Krishnamurthy and
                     Mai Abdelhakim and
                     Konstantinos Pelechrinis and
                     Jiawei Xu},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {114--126},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613498},
        urn       = {urn:nbn:de:101:1-2017080613498},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {As the Internet of Things permeates every aspect of human life, assessing the credence or integrity of the data generated by "things" becomes a central exercise for making decisions or in auditing events. In this paper, we present a vision of this exercise that includes the notion of data credence, assessing data credence in an efficient manner, and the use of technologies that are on the horizon for the very large scale Internet of Things.}
    }
0 citations in 2018

 Open Access 

A Highly Scalable IoT Architecture through Network Function Virtualization

Igor Miladinovic, Sigrid Schefer-Wenzl

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 127-135, 2017, Downloads: 7426, Citations: 22

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613543 | GNL-LP: 1137820284 | Meta-Data: tex xml rdf rss

Abstract: As the number of devices for Internet of Things (IoT) is rapidly growing, existing communication infrastructures are forced to continually evolve. The next generation network infrastructure is expected to be virtualized and able to integrate different kinds of information technology resources. Network Functions Virtualization (NFV) is one of the leading concepts facilitating the operation of network services in a scalable manner. In this paper, we present an architecture involving NFV to meet the requirements of highly scalable IoT scenarios. We highlight the benefits and challenges of our approach for IoT stakeholders. Finally, the paper illustrates our vision of how the proposed architecture can be applied in the context of a state-of-the-art high-tech operating room, which we are going to realize in future work.

BibTex:

    @Article{OJIOT_2017v3i1n11_Miladinovic,
        title     = {A Highly Scalable IoT Architecture through Network Function Virtualization},
        author    = {Igor Miladinovic and
                     Sigrid Schefer-Wenzl},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {127--135},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613543},
        urn       = {urn:nbn:de:101:1-2017080613543},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {As the number of devices for Internet of Things (IoT) is rapidly growing, existing communication infrastructures are forced to continually evolve. The next generation network infrastructure is expected to be virtualized and able to integrate different kinds of information technology resources. Network Functions Virtualization (NFV) is one of the leading concepts facilitating the operation of network services in a scalable manner. In this paper, we present an architecture involving NFV to meet the requirements of highly scalable IoT scenarios. We highlight the benefits and challenges of our approach for IoT stakeholders. Finally, the paper illustrates our vision of how the proposed architecture can be applied in the context of a state-of-the-art high-tech operating room, which we are going to realize in future work.}
    }
7 citations in 2018:

NFV enabled IoT architecture for an operating room environment

Igor Miladinovic, Sigrid Schefer-Wenzl

In 4th World Forum on Internet of Things (WF-IoT), Pages 98-102, 2018.

IoT survey: An SDN and fog computing perspective

Ola Salman, Imad Elhajj, Ali Chehab, Ayman Kayssi

Computer Networks, 143, Pages 221 - 246, 2018.

Dynamic Allocation of Smart City Applications

Igor Miladinovic, Sigrid Schefer-Wenzl

Open Journal of Internet Of Things (OJIOT), 4(1), Pages 144-149, 2018.

Powerline Communication for the Smart Grid and Internet of Things - Powerline Narrowband Frequency Channel Characterization Based on the TMS320C2000 C28x Digital Signal Processor

Emmanuel Adebomi Oyekanlu

2018. Dissertation, Drexel University

Osmotic Collaborative Computing for Machine Learning and Cybersecurity Applications in Industrial IoT Networks and Cyber Physical Systems with Gaussian Mixture Models

Emmanuel Oyekanlu

In IEEE 4th International Conference on Collaboration and Internet Computing (CIC), Pages 326-335, 2018.

Implementierung einer IoT-Architektur zur sicheren Übertragung von Sensordaten

Markus Amon, Silvia Schmidt

2018. Master thesis at Fachhochschule Campus Wien

 Open Access 

Towards a Model-driven Performance Prediction Approach for Internet of Things Architectures

Johannes Kroß, Sebastian Voss, Helmut Krcmar

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 136-141, 2017, Downloads: 5145, Citations: 3

Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.

Full-Text: pdf | URN: urn:nbn:de:101:1-2017080613524 | GNL-LP: 1137820268 | Meta-Data: tex xml rdf rss

Abstract: Indisputable, security and interoperability play major concerns in Internet of Things (IoT) architectures and applications. In this paper, however, we emphasize the role and importance of performance and scalability as additional, crucial aspects in planning and building sustainable IoT solutions. IoT architectures are complicated system-of-systems that include different developer roles, development processes, organizational units, and a multilateral governance. Its performance is often neglected during development but becomes a major concern at the end of development and results in supplemental efforts, costs, and refactoring. It should not be relied on linearly scaling for such systems only by using up-to-date technologies that may promote such behavior. Furthermore, different security or interoperability choices also have a considerable impact on performance and may result in unforeseen trade-offs. Therefore, we propose and pursue the vision of a model-driven approach to predict and evaluate the performance of IoT architectures early in the system lifecylce in order to guarantee efficient and scalable systems reaching from sensors to business applications.

BibTex:

    @Article{OJIOT_2017v3i1n12_Kross,
        title     = {Towards a Model-driven Performance Prediction Approach for Internet of Things Architectures},
        author    = {Johannes Kro\ss{} and
                     Sebastian Voss and
                     Helmut Krcmar},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {136--141},
        note      = {Special Issue: Proceedings of the International Workshop on Very Large Internet of Things (VLIoT 2017) in conjunction with the VLDB 2017 Conference in Munich, Germany.},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017080613524},
        urn       = {urn:nbn:de:101:1-2017080613524},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Indisputable, security and interoperability play major concerns in Internet of Things (IoT) architectures and applications. In this paper, however, we emphasize the role and importance of performance and scalability as additional, crucial aspects in planning and building sustainable IoT solutions. IoT architectures are complicated system-of-systems that include different developer roles, development processes, organizational units, and a multilateral governance. Its performance is often neglected during development but becomes a major concern at the end of development and results in supplemental efforts, costs, and refactoring. It should not be relied on linearly scaling for such systems only by using up-to-date technologies that may promote such behavior. Furthermore, different security or interoperability choices also have a considerable impact on performance and may result in unforeseen trade-offs. Therefore, we propose and pursue the vision of a model-driven approach to predict and evaluate the performance of IoT architectures early in the system lifecylce in order to guarantee efficient and scalable systems reaching from sensors to business applications.}
    }
0 citations in 2018

 Open Access 

Scalable Generation of Type Embeddings Using the ABox

Mayank Kejriwal, Pedro Szekely

Open Journal of Semantic Web (OJSW), 4(1), Pages 20-34, 2017, Downloads: 4926

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112160 | GNL-LP: 1140718193 | Meta-Data: tex xml rdf rss

Abstract: Structured knowledge bases gain their expressive power from both the ABox and TBox. While the ABox is rich in data, the TBox contains the ontological assertions that are often necessary for logical inference. The crucial links between the ABox and the TBox are served by is-a statements (formally a part of the ABox) that connect instances to types, also referred to as classes or concepts. Latent space embedding algorithms, such as RDF2Vec and TransE, have been used to great effect to model instances in the ABox. Such algorithms work well on large-scale knowledge bases like DBpedia and Geonames, as they are robust to noise and are low-dimensional and real-valued. In this paper, we investigate a supervised algorithm for deriving type embeddings in the same latent space as a given set of entity embeddings. We show that our algorithm generalizes to hundreds of types, and via incremental execution, achieves near-linear scaling on graphs with millions of instances and facts. We also present a theoretical foundation for our proposed model, and the means of validating the model. The empirical utility of the embeddings is illustrated on five partitions of the English DBpedia ABox. We use visualization and clustering to show that our embeddings are in good agreement with the manually curated TBox. We also use the embeddings to perform a soft clustering on 4 million DBpedia instances in terms of the 415 types explicitly participating in is-a relationships in the DBpedia ABox. Lastly, we present a set of results obtained by using the embeddings to recommend types for untyped instances. Our method is shown to outperform another feature-agnostic baseline while achieving 15x speedup without any growth in memory usage.
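Deriving a type embedding in the same latent space as given entity embeddings can be sketched with simple mean pooling over the type's is-a members, followed by cosine-similarity type recommendation. This is an illustrative baseline under assumed names, not the paper's supervised algorithm:

```python
import numpy as np

def type_embedding(entity_vecs, members):
    """Embed a type as the mean of its member entities' vectors.

    members: entities linked to the type via is-a assertions in the ABox.
    """
    return np.mean([entity_vecs[e] for e in members], axis=0)

def recommend_types(entity_vecs, type_vecs, entity):
    """Rank candidate types for an (possibly untyped) entity by cosine similarity."""
    v = entity_vecs[entity]

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    return sorted(type_vecs, key=lambda t: cos(v, type_vecs[t]), reverse=True)
```

Because pooling is an incremental sum, type vectors can be updated one entity at a time, which hints at how near-linear scaling on graphs with millions of instances is achievable.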

BibTex:

    @Article{OJSW_2017v4i1n02_Kejriwal,
        title     = {Scalable Generation of Type Embeddings Using the ABox},
        author    = {Mayank Kejriwal and
                     Pedro Szekely},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {20--34},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112160},
        urn       = {urn:nbn:de:101:1-2017100112160},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Structured knowledge bases gain their expressive power from both the ABox and TBox. While the ABox is rich in data, the TBox contains the ontological assertions that are often necessary for logical inference. The crucial links between the ABox and the TBox are served by is-a statements (formally a part of the ABox) that connect instances to types, also referred to as classes or concepts. Latent space embedding algorithms, such as RDF2Vec and TransE, have been used to great effect to model instances in the ABox. Such algorithms work well on large-scale knowledge bases like DBpedia and Geonames, as they are robust to noise and are low-dimensional and real-valued. In this paper, we investigate a supervised algorithm for deriving type embeddings in the same latent space as a given set of entity embeddings. We show that our algorithm generalizes to hundreds of types, and via incremental execution, achieves near-linear scaling on graphs with millions of instances and facts. We also present a theoretical foundation for our proposed model, and the means of validating the model. The empirical utility of the embeddings is illustrated on five partitions of the English DBpedia ABox. We use visualization and clustering to show that our embeddings are in good agreement with the manually curated TBox. We also use the embeddings to perform a soft clustering on 4 million DBpedia instances in terms of the 415 types explicitly participating in is-a relationships in the DBpedia ABox. Lastly, we present a set of results obtained by using the embeddings to recommend types for untyped instances. Our method is shown to outperform another feature-agnostic baseline while achieving 15x speedup without any growth in memory usage.}
    }
0 citations in 2018

 Open Access 

Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements

Dennis Marten, Andreas Heuer

Open Journal of Databases (OJDB), 4(1), Pages 22-42, 2017, Downloads: 6084, Citations: 16

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112181 | GNL-LP: 1140718215 | Meta-Data: tex xml rdf rss

Abstract: Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real-life applications has increased even further within the last years. Nowadays, it is possible to gather massive amounts of data at any time at comparatively low cost. While this availability of data could be used to develop complex models, its implementation is often constrained by limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or using parallelization techniques like the MapReduce framework. However, these options might be too cost-intensive, unsuitable, or too time-consuming to learn and realize. Following the premise that developers usually are not SQL experts, we discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for an (inter-operator and intra-operator) parallel execution on a parallel DBMS as a second step of our research, which is not part of this paper.
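
To illustrate the kind of transformation the abstract describes, here is a hedged sketch of one step of the HMM forward algorithm expressed as a single SQL aggregation. The table and column names are our own invention, not the paper's schema, and SQLite stands in for the parallel DBMS only to make the statement runnable.

```python
# Hypothetical schema (not the paper's): the forward probabilities,
# transition matrix and emission matrix live in relational tables, and
# one forward step becomes a join-plus-GROUP-BY aggregation.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE alpha (state TEXT, prob REAL);          -- forward probs at time t
CREATE TABLE trans (s_from TEXT, s_to TEXT, p REAL); -- transition matrix
CREATE TABLE emit  (state TEXT, obs TEXT, p REAL);   -- emission matrix
""")
con.executemany("INSERT INTO alpha VALUES (?,?)", [("Rain", 0.5), ("Sun", 0.5)])
con.executemany("INSERT INTO trans VALUES (?,?,?)",
                [("Rain", "Rain", 0.7), ("Rain", "Sun", 0.3),
                 ("Sun", "Rain", 0.4), ("Sun", "Sun", 0.6)])
con.executemany("INSERT INTO emit VALUES (?,?,?)",
                [("Rain", "umbrella", 0.9), ("Sun", "umbrella", 0.2)])

# alpha_{t+1}(s') = SUM_s alpha_t(s) * trans(s, s') * emit(s', o)
rows = con.execute("""
    SELECT t.s_to AS state, SUM(a.prob * t.p * e.p) AS prob
    FROM alpha a
    JOIN trans t ON t.s_from = a.state
    JOIN emit  e ON e.state = t.s_to AND e.obs = ?
    GROUP BY t.s_to
    ORDER BY state
""", ("umbrella",)).fetchall()
# rows: Rain -> 0.5*0.7*0.9 + 0.5*0.4*0.9 = 0.495
#       Sun  -> 0.5*0.3*0.2 + 0.5*0.6*0.2 = 0.09
```

Because the aggregation groups independently by target state, a parallel DBMS can partition the work across states or across rows of `alpha`, which is the kind of inter-operator and intra-operator parallelism the abstract points to.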

BibTex:

    @Article{OJDB_2017v4i1n02_Marten,
        title     = {Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements},
        author    = {Dennis Marten and
                     Andreas Heuer},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {22--42},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112181},
        urn       = {urn:nbn:de:101:1-2017100112181},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Machine Learning is a research field with substantial relevance for many applications in different areas. Because of technical improvements in sensor technology, its value for real-life applications has increased even further within the last years. Nowadays, it is possible to gather massive amounts of data at any time at comparatively low cost. While this availability of data could be used to develop complex models, its implementation is often constrained by limitations in computing power. In order to overcome performance problems, developers have several options, such as improving their hardware, optimizing their code, or using parallelization techniques like the MapReduce framework. However, these options might be too cost-intensive, unsuitable, or too time-consuming to learn and realize. Following the premise that developers usually are not SQL experts, we discuss another approach in this paper: using transparent database support for Big Data Analytics. Our aim is to automatically transform Machine Learning algorithms to parallel SQL database systems. In this paper, we especially show how a Hidden Markov Model, given in the analytics language R, can be transformed to a sequence of SQL statements. These SQL statements will be the basis for an (inter-operator and intra-operator) parallel execution on a parallel DBMS as a second step of our research, which is not part of this paper.}
    }
4 citations in 2018:

Stonebraker gegen Google: Das 2:0 fällt in Rostock

Daniel Dietrich, Ole Fenske, Stefan Schomacker, Philipp Schweers, Andreas Heuer

In 30th GI-Workshop on Foundations of Databases (Grundlagen von Datenbanken), Wuppertal, Germany, 2018.

Inverses in Research Data Management: Combining Provenance Management, Schema and Data Evolution

Tanja Auge, Andreas Heuer

In 30th GI-Workshop on Foundations of Databases (Grundlagen von Datenbanken), Wuppertal, Germany, 2018.

Inverse im Forschungsdatenmanagement

Tanja Auge, Andreas Heuer

In Proceedings of 30th GI-Workshop on Foundations of Databases (Grundlagen von Datenbanken), Wuppertal, Germany, 2018.

Exposé eines Promotionsprojektes: Provenance Management für Data-Science-Anwendungen unter Berücksichtigung von Daten- und Schema-Evolution

Tanja Auge

In Technical Report, Universität Rostock, 2018.

 Open Access 

Performance Aspects of Object-based Storage Services on Single Board Computers

Christian Baun, Henry-Norbert Cocos, Rosa-Maria Spanou

Open Journal of Cloud Computing (OJCC), 4(1), Pages 1-16, 2017, Downloads: 6179

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112204 | GNL-LP: 1140718231 | Meta-Data: tex xml rdf rss

Abstract: When an object-based storage service is required and the cost of purchasing and operating servers, workstations, or personal computers is a challenge, single board computers may be an option for building an inexpensive system. This paper describes the lessons learned from deploying different private cloud storage services, which implement the functionality and API of the Amazon Simple Storage Service, on a single board computer, the development of a lightweight tool to investigate the performance, and an analysis of the archived measurement data. The objective of the performance evaluation is to get an impression of whether it is possible and useful to deploy object-based storage services on single board computers.

BibTex:

    @Article{OJCC_2017v4i1n01_Baun,
        title     = {Performance Aspects of Object-based Storage Services on Single Board Computers},
        author    = {Christian Baun and
                     Henry-Norbert Cocos and
                     Rosa-Maria Spanou},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112204},
        urn       = {urn:nbn:de:101:1-2017100112204},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {When an object-based storage service is required and the cost of purchasing and operating servers, workstations, or personal computers is a challenge, single board computers may be an option for building an inexpensive system. This paper describes the lessons learned from deploying different private cloud storage services, which implement the functionality and API of the Amazon Simple Storage Service, on a single board computer, the development of a lightweight tool to investigate the performance, and an analysis of the archived measurement data. The objective of the performance evaluation is to get an impression of whether it is possible and useful to deploy object-based storage services on single board computers.}
    }
0 citations in 2018

 Open Access 

Security and Compliance Ontology for Cloud Service Agreements

Ana Sofía Zalazar, Luciana Ballejos, Sebastian Rodriguez

Open Journal of Cloud Computing (OJCC), 4(1), Pages 17-25, 2017, Downloads: 5027, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-2017100112242 | GNL-LP: 1140718274 | Meta-Data: tex xml rdf rss

Abstract: Cloud computing is a business paradigm in which two important roles must be defined: provider and consumer. Providers offer services (e.g. web applications, web services, and databases) and consumers pay for using them. The goal of this research is to focus on the security and compliance aspects of cloud services. An ontology, which is a conceptualization of the cloud domain, is introduced for analyzing different compliance aspects of cloud agreements. The terms, properties, and relations are shown in a diagram. The proposed ontology can help service consumers to extract relevant data from service level agreements, to interpret compliance regulations, and to compare different contractual terms. Finally, some recommendations are presented for cloud consumers to adopt services and evaluate security risks.

BibTex:

    @Article{OJCC_2017v4i1n02_Zalazar,
        title     = {Security and Compliance Ontology for Cloud Service Agreements},
        author    = {Ana Sof\'{\i}a Zalazar and
                     Luciana Ballejos and
                     Sebastian Rodriguez},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {17--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017100112242},
        urn       = {urn:nbn:de:101:1-2017100112242},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Cloud computing is a business paradigm in which two important roles must be defined: provider and consumer. Providers offer services (e.g. web applications, web services, and databases) and consumers pay for using them. The goal of this research is to focus on the security and compliance aspects of cloud services. An ontology, which is a conceptualization of the cloud domain, is introduced for analyzing different compliance aspects of cloud agreements. The terms, properties, and relations are shown in a diagram. The proposed ontology can help service consumers to extract relevant data from service level agreements, to interpret compliance regulations, and to compare different contractual terms. Finally, some recommendations are presented for cloud consumers to adopt services and evaluate security risks.}
    }
2 citations in 2018:

Cyber Supply Chain Risks in Cloud Computing - Bridging the Risk Assessment Gap

Olusola Akinrolabu, Steve New, Andrew Martin

Open Journal of Cloud Computing (OJCC), 5(1), Pages 1-19, 2018.

Revisión Bibliográfica de la Literatura de Ingeniería de Requerimientos para Cloud Computing

Ana Sofía Zalazar, Luciana Ballejos, Sebastian Rodriguez

In 6to Congreso Nacional de Ingeniería en Informática/Sistemas de Información, Buenos Aires, Argentina, 2018.

 Open Access 

Technology Selection for Big Data and Analytical Applications

Denis Lehmann, David Fekete, Gottfried Vossen

Open Journal of Big Data (OJBD), 3(1), Pages 1-25, 2017, Downloads: 4848, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266876 | GNL-LP: 1147192790 | Meta-Data: tex xml rdf rss

Abstract: The term Big Data has become pervasive in recent years, as smart phones, televisions, washing machines, refrigerators, smart meters, diverse sensors, eyeglasses, and even clothes connect to the Internet. However, the data they generate is essentially worthless without appropriate data analytics that utilizes information retrieval, statistics, and various other techniques. As Big Data is commonly too big for a single person or institution to investigate, appropriate tools have been developed in recent years that go far beyond a traditional data warehouse. Unfortunately, there is no single solution but a large variety of different tools, each with distinct functionalities, properties, and characteristics. Especially small and medium-sized companies have a hard time keeping track, as this requires time, skills, money, and specific knowledge that, in combination, result in high entrance barriers to Big Data utilization. This paper aims to reduce these barriers by explaining and structuring different classes of technologies and the basic criteria for proper technology selection. It proposes a framework that guides especially small and mid-sized companies through a suitable selection process that can serve as a basis for further advances.

BibTex:

    @Article{OJBD_2017v3n01_Lehmann,
        title     = {Technology Selection for Big Data and Analytical Applications},
        author    = {Denis Lehmann and
                     David Fekete and
                     Gottfried Vossen},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {1--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266876},
        urn       = {urn:nbn:de:101:1-201711266876},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The term Big Data has become pervasive in recent years, as smart phones, televisions, washing machines, refrigerators, smart meters, diverse sensors, eyeglasses, and even clothes connect to the Internet. However, the data they generate is essentially worthless without appropriate data analytics that utilizes information retrieval, statistics, and various other techniques. As Big Data is commonly too big for a single person or institution to investigate, appropriate tools have been developed in recent years that go far beyond a traditional data warehouse. Unfortunately, there is no single solution but a large variety of different tools, each with distinct functionalities, properties, and characteristics. Especially small and medium-sized companies have a hard time keeping track, as this requires time, skills, money, and specific knowledge that, in combination, result in high entrance barriers to Big Data utilization. This paper aims to reduce these barriers by explaining and structuring different classes of technologies and the basic criteria for proper technology selection. It proposes a framework that guides especially small and mid-sized companies through a suitable selection process that can serve as a basis for further advances.}
    }
0 citations in 2018

 Open Access 

Ontology-Based Data Integration in Multi-Disciplinary Engineering Environments: A Review

Fajar J. Ekaputra, Marta Sabou, Estefanía Serral, Elmar Kiesling, Stefan Biffl

Open Journal of Information Systems (OJIS), 4(1), Pages 1-26, 2017, Downloads: 6856, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266863 | GNL-LP: 1147192413 | Meta-Data: tex xml rdf rss

Abstract: Today's industrial production plants are complex mechatronic systems. In the course of the production plant lifecycle, engineers from a variety of disciplines (e.g., mechanics, electronics, automation) need to collaborate in multi-disciplinary settings that are characterized by heterogeneity in terminology, methods, and tools. This collaboration yields a variety of engineering artifacts that need to be linked and integrated, which on the technical level is reflected in the need to integrate heterogeneous data. Semantic Web technologies, in particular ontology-based data integration (OBDI), are promising to tackle this challenge that has attracted strong interest from the engineering research community. This interest has resulted in a growing body of literature that is dispersed across the Semantic Web and Automation System Engineering research communities and has not been systematically reviewed so far. We address this gap with a survey reflecting on OBDI applications in the context of Multi-Disciplinary Engineering Environment (MDEE). To this end, we analyze and compare 23 OBDI applications from both the Semantic Web and the Automation System Engineering research communities. Based on this analysis, we (i) categorize OBDI variants used in MDEE, (ii) identify key problem context characteristics, (iii) compare strengths and limitations of OBDI variants as a function of problem context, and (iv) provide recommendation guidelines for the selection of OBDI variants and technologies for OBDI in MDEE.

BibTex:

    @Article{OJIS_2017v4i1n01_Ekaputra,
        title     = {Ontology-Based Data Integration in Multi-Disciplinary Engineering Environments: A Review},
        author    = {Fajar J. Ekaputra and
                     Marta Sabou and
                     Estefan\'{\i}a Serral and
                     Elmar Kiesling and
                     Stefan Biffl},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {1--26},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266863},
        urn       = {urn:nbn:de:101:1-201711266863},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Today's industrial production plants are complex mechatronic systems. In the course of the production plant lifecycle, engineers from a variety of disciplines (e.g., mechanics, electronics, automation) need to collaborate in multi-disciplinary settings that are characterized by heterogeneity in terminology, methods, and tools. This collaboration yields a variety of engineering artifacts that need to be linked and integrated, which on the technical level is reflected in the need to integrate heterogeneous data. Semantic Web technologies, in particular ontology-based data integration (OBDI), are promising to tackle this challenge that has attracted strong interest from the engineering research community. This interest has resulted in a growing body of literature that is dispersed across the Semantic Web and Automation System Engineering research communities and has not been systematically reviewed so far. We address this gap with a survey reflecting on OBDI applications in the context of Multi-Disciplinary Engineering Environment (MDEE). To this end, we analyze and compare 23 OBDI applications from both the Semantic Web and the Automation System Engineering research communities. Based on this analysis, we (i) categorize OBDI variants used in MDEE, (ii) identify key problem context characteristics, (iii) compare strengths and limitations of OBDI variants as a function of problem context, and (iv) provide recommendation guidelines for the selection of OBDI variants and technologies for OBDI in MDEE.}
    }
2 citations in 2018:

Overview of Software Development Topics for the Digitalization of Industry

Diana Peters, Philipp Matthias Schäfer

2018. Deutsches Zentrum für Luft- und Raumfahrt (DLR): DLR-IB-DW-JE-2018-26

Self-Improving Additive Manufacturing Knowledge Management

Yan Lu, Zhuo Yang, Douglas Eddy, Sundar Krishnamurty

In ASME International Design Engineering Technical Conferences and Computers and Information in Engineering Conference, Volume 1B: 38th Computers and Information in Engineering Conference, Quebec City, Quebec, Canada, 2018.

 Open Access 

Purposeful Searching for Citations of Scholarly Publications

Fabian Rosenthal, Sven Groppe

Open Journal of Information Systems (OJIS), 4(1), Pages 27-48, 2017, Downloads: 5995, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266882 | GNL-LP: 1147193223 | Meta-Data: tex xml rdf rss

Abstract: Citation data contains the citations among scholarly publications. The data can be used to find relevant sources during research, identify emerging trends and research areas, compute metrics for comparing authors or journals, or for thematic clustering. Manual administration of citation data is limited due to the large number of publications. In this work, we hence lay the foundations for the automatic search for scientific citations. The unique characteristic is the purposeful search for citations of a specified set of publications (of, e.g., an author or an institute). Therefore, search strategies are developed and evaluated in this work in order to reduce the cost of analyzing documents that contain no citations to the given set of publications. In our experiments, for authors with more than 100 publications about 75% of the citations were found. The purposeful strategy thereby examined only 1.5% of the 120 million publications of the data set.
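
The core idea, examining only the candidate documents most likely to cite the target set rather than scanning all of them, can be sketched as follows. This is a toy heuristic of our own (scoring candidates by mentions of the target authors' names), not the paper's actual search strategy, and all documents below are invented.

```python
# Toy illustration of purposeful citation search (not the paper's
# exact strategy): rank candidate documents by a cheap relevance
# score, then spend the expensive detailed analysis only on the
# top-ranked ones within a fixed examination budget.

def purposeful_search(targets, candidates, budget):
    """targets: set of author names; candidates: {doc_id: text};
    budget: max number of documents to examine in detail."""
    def score(text):
        return sum(1 for name in targets if name in text)
    ranked = sorted(candidates, key=lambda d: score(candidates[d]), reverse=True)
    hits = []
    for doc in ranked[:budget]:          # examine only the top-ranked docs
        if score(candidates[doc]) > 0:   # placeholder for detailed analysis
            hits.append(doc)
    return hits

docs = {
    "d1": "cites Groppe and Rosenthal on citation search",
    "d2": "unrelated work on databases",
    "d3": "builds on Groppe's SPARQL engine",
}
found = purposeful_search({"Groppe", "Rosenthal"}, docs, budget=2)
# found == ["d1", "d3"]; d2 is never examined in detail
```

The budget is what keeps the fraction of examined documents small, matching the abstract's trade-off of finding about 75% of the citations while examining only 1.5% of the corpus.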

BibTex:

    @Article{OJIS_2017v4i1n02_Rosenthal,
        title     = {Purposeful Searching for Citations of Scholarly Publications},
        author    = {Fabian Rosenthal and
                     Sven Groppe},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {27--48},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266882},
        urn       = {urn:nbn:de:101:1-201711266882},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Citation data contains the citations among scholarly publications. The data can be used to find relevant sources during research, identify emerging trends and research areas, compute metrics for comparing authors or journals, or for thematic clustering. Manual administration of citation data is limited due to the large number of publications. In this work, we hence lay the foundations for the automatic search for scientific citations. The unique characteristic is the purposeful search for citations of a specified set of publications (of, e.g., an author or an institute). Therefore, search strategies are developed and evaluated in this work in order to reduce the cost of analyzing documents that contain no citations to the given set of publications. In our experiments, for authors with more than 100 publications about 75\% of the citations were found. The purposeful strategy thereby examined only 1.5\% of the 120 million publications of the data set.}
    }
0 citations in 2018

 Open Access 

A Semantic Safety Check System for Emergency Management

Yogesh Pandey, Srividya K. Bansal

Open Journal of Semantic Web (OJSW), 4(1), Pages 35-50, 2017, Downloads: 6778, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201711266890 | GNL-LP: 1147193460 | Meta-Data: tex xml rdf rss

Abstract: There has been an exponential growth in the availability of both structured and unstructured data that can be leveraged to provide better emergency management in case of natural disasters and humanitarian crises. This paper is an extension of a semantics-based web application for safety check, which uses semantic web technologies to extract different kinds of relevant data about a natural disaster and alerts its users. The goal of this work is to design and develop a knowledge-intensive application that identifies people who may have been affected by natural or man-made disasters at any geographical location and notifies them with safety instructions. This involves the extraction of data from various sources for emergency alerts, weather alerts, and contact data. The extracted data is integrated using a semantic data model and transformed into semantic data. Semantic reasoning is done through rules and queries. The system is built using front-end web development technologies and, at the back-end, semantic web technologies such as RDF, OWL, SPARQL, Apache Jena, TDB, and the Apache Fuseki server. We present the details of the overall approach, the process of data collection and transformation, and the system built. This extended version includes a detailed discussion of the semantic reasoning module, research challenges in building this software system, related work in this area, and future research directions, including the incorporation of geospatial components and standards.

BibTex:

    @Article{OJSW_2017v4i1n03_Pandey,
        title     = {A Semantic Safety Check System for Emergency Management},
        author    = {Yogesh Pandey and
                     Srividya K. Bansal},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2017},
        volume    = {4},
        number    = {1},
        pages     = {35--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201711266890},
        urn       = {urn:nbn:de:101:1-201711266890},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There has been an exponential growth in the availability of both structured and unstructured data that can be leveraged to provide better emergency management in case of natural disasters and humanitarian crises. This paper is an extension of a semantics-based web application for safety check, which uses semantic web technologies to extract different kinds of relevant data about a natural disaster and alerts its users. The goal of this work is to design and develop a knowledge-intensive application that identifies people who may have been affected by natural or man-made disasters at any geographical location and notifies them with safety instructions. This involves the extraction of data from various sources for emergency alerts, weather alerts, and contact data. The extracted data is integrated using a semantic data model and transformed into semantic data. Semantic reasoning is done through rules and queries. The system is built using front-end web development technologies and, at the back-end, semantic web technologies such as RDF, OWL, SPARQL, Apache Jena, TDB, and the Apache Fuseki server. We present the details of the overall approach, the process of data collection and transformation, and the system built. This extended version includes a detailed discussion of the semantic reasoning module, research challenges in building this software system, related work in this area, and future research directions, including the incorporation of geospatial components and standards.}
    }
1 citation in 2018:

Applying Semantic Web Technologies for Decision Support in Climate-Related Crisis Management

Efstratios Kontopoulos, Panagiotis Mitzias, Stamatia Dasiopoulou, Jürgen Moßgraber, Simon Mille, Philipp Hertweck, Tobias Hellmund, Anastasios Karakostas, Stefanos Vrochidis, Leo Wanner, Ioannis Kompatsiaris

In 2nd International Conference Citizen Observatories for natural hazards and Water Management, Venice, Italy, 2018.

 Open Access 

Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects

Christophe Ponsard, Mounir Touzani, Annick Majchrowski

Open Journal of Big Data (OJBD), 3(1), Pages 26-41, 2017, Downloads: 5908, Citations: 7

Full-Text: pdf | URN: urn:nbn:de:101:1-201712245446 | GNL-LP: 1149497165 | Meta-Data: tex xml rdf rss

Abstract: Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions is available to manage such Big Data, many companies lack the maturity to manage such projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature, from early data mining projects to the present day. It is complemented by lessons learned from pilots conducted in different areas (IT, health, space, food industry), with a focus on two pilots that give a concrete vision of how to drive the implementation, with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up, and a progressive rise in maturity.

BibTex:

    @Article{OJBD_2017v3i1n02_Ponsard,
        title     = {Combining Process Guidance and Industrial Feedback for Successfully Deploying Big Data Projects},
        author    = {Christophe Ponsard and
                     Mounir Touzani and
                     Annick Majchrowski},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2017},
        volume    = {3},
        number    = {1},
        pages     = {26--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201712245446},
        urn       = {urn:nbn:de:101:1-201712245446},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Companies are faced with the challenge of handling increasing amounts of digital data to run or improve their business. Although a large set of technical solutions is available to manage such Big Data, many companies lack the maturity to manage such projects, which results in a high failure rate. This paper aims at providing better process guidance for a successful deployment of Big Data projects. Our approach is based on the combination of a set of methodological bricks documented in the literature, from early data mining projects to the present day. It is complemented by lessons learned from pilots conducted in different areas (IT, health, space, food industry), with a focus on two pilots that give a concrete vision of how to drive the implementation, with emphasis on the identification of values, the definition of a relevant strategy, the use of an Agile follow-up, and a progressive rise in maturity.}
    }
0 citations in 2018