RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model, as defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.
  • Authors retain all copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website. 

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast and high-quality but lower-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver for editors and reviewers of RonPub as a reward for their work. See the respective journal webpage for the concrete publication fee.

RonPub Publication Criteria:

What we are most concerned about is the quality, not the quantity, of publications. We only publish high-quality scholarly papers. The Publication Criteria section below describes the criteria that a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

In order to ensure the publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater and Viper, to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE) and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. Publications archived in the German National Library are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original systems, which are long obsolete.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, offering wonderful seaside resorts and sandy beaches as well as good restaurants.

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available in Word and LaTeX versions for download on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and provides other necessary information for preparing manuscripts. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal, initially in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (containing all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the names, affiliations and e-mail addresses of all suggested reviewers.

Manuscript Status

After submitting a manuscript, authors will receive an email confirming its receipt within a few days. Subsequent enquiries concerning the progress of a paper should be made to the corresponding editorial office (see the individual journal webpage for concrete contact information).

Review Procedure

RonPub is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript itself as well as the review reports and make an accept/revise/reject decision. Authors will be informed of the decision and the reviewing results within 6-8 weeks on average after manuscript submission. In the case of a revision, authors are required to revise the manuscript adequately to address the concerns raised in the review reports. A new round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. Publications archived in the German National Library are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original systems, which are long obsolete. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive the articles they have published with RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, about 60 kilometers from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in its management. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision on whether a paper can be published after peer review and revisions.

As a reward for this work, an Editor-in-Chief obtains a 25% discount on the standard publication fee for her/his papers (where the Editor-in-Chief is one of the authors) published in any of the RonPub journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in ensuring the scientific quality of the journal and in deciding on its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript himself/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation on the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the editorial office, so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, and in some special situations, the editorial office may forward the requests to the publisher or the Editor-in-Chief, who will make the decision directly.

As a reward for their work, an editor obtains a 25% discount on the standard publication fee for her/his papers (where the editor is one of the authors) published in any of the RonPub journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors are in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by (guest) editors are sent to the Editor-in-Chief of the journal, who checks the quality of the papers and makes the final decision on whether a paper can be published.

Our editorial office has the right to ask authors directly to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors are required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors do not make a genuine effort to revise their paper. A high-quality publication needs the joint efforts of the journal, reviewers, editors, the editor-in-chief and the authors.

Guest Editors are also expected to write an editorial paper for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue obtain a 25% discount on the standard publication fee for any of their papers published in any of the RonPub journals for one year.

Reviewers’ Responsibilities

A reviewer is mainly responsible for reviewing manuscripts, writing review reports and recommending the acceptance or rejection of manuscripts. Reviewers are also encouraged to provide input about the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

As a reward for the reviewing work, a reviewer obtains a 25% discount on the standard publication fee for her/his papers (where the reviewer is one of the authors) published in any of the RonPub journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaboration to [email protected].

Publication Criteria

This section provides important information for both the scientific committees and authors.

Ethics Requirements:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office checks the authenticity of the authors and their affiliations before a peer review begins. It is important that the authors communicate with us using the email addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journals against existing publications. The overall quality of the paper is also checked, including format, figures, tables, integrity and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for acceptance of manuscripts is the quality of the work. This is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, each of these aspects should reach at least a middle level.

Guidelines for Rejection:

  • If the work described in the manuscript has already been published, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains falsified or fabricated data, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access and peer-reviewed scientific journals. For more information about RonPub, please visit this link.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals along with your curriculum vitae to [email protected].

We are also open to publishing collaborations with research societies. Please send your publishing collaboration proposals to [email protected] as well.

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor/reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any RonPub journal, please send an email with your curriculum vitae to [email protected]. We will reply as soon as possible. For more information about editors/reviewers, please visit this link.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please e-mail [email protected].

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.

RonPub's Transparent Impact Factor of the Year 2017: 2.19

There are numerous criticisms of the use of impact factors and debates about the validity of the impact factor as a measure of journal importance [1, 2, 3, 5, 6, 8, 9]. Several national-level institutions, like the German Research Foundation [4] and the Science and Technology Select Committee [7] of the United Kingdom, urge their funding councils to evaluate only the quality of individual articles, not the reputation of the journal in which they are published. Nevertheless, we are sometimes asked about the impact factors of our journals. Therefore, we provide here the impact factors for readers who are still interested in them. Our impact factors are calculated in the same way as those of Thomson Reuters, but they are not computed by the company Thomson Reuters: we compute them ourselves, and anyone can validate them, because we present all data for computing the impact factor, with neither registration nor fees required. These data are provided here, and each reader can re-compute and check the calculation of these impact factors. Therefore, we call our impact factor the Transparent Impact Factor.

For the calculation of the Impact Factor of a year Y, we need the number A of articles published in the years Y-1 and Y-2 (excluding editorials). Furthermore, we determine the number B of citations in the year Y that cite RonPub articles published in the years Y-1 or Y-2. The (2-year) Transparent Impact Factor is then given by B/A.
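
Written out in LaTeX notation (a direct restatement of the definition above; the symbol TIF is introduced here merely as shorthand for the Transparent Impact Factor):

    \mathrm{TIF}(Y) \;=\; \frac{B}{A} \;=\; \frac{\text{citations in year } Y \text{ to articles from } Y-1 \text{ and } Y-2}{\text{articles published in } Y-1 \text{ and } Y-2}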

There are A := 48 articles published in the years 2015 and 2016. These articles received B := 105 citations in scientific contributions published in 2017. These citations are listed below.

Therefore, the (2-year) Transparent Impact Factor for the year 2017 is B/A = 105/48 ≈ 2.19.
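
Since all the underlying data are published, anyone can redo this arithmetic; for example, a minimal Python sketch using the figures above (the variable names are ours, not part of the original text):

    # Transparent Impact Factor for 2017, computed from the published figures
    A = 48    # articles published in 2015 and 2016 (excluding editorials)
    B = 105   # citations received in 2017 by those articles
    tif = B / A
    print(tif)  # 2.1875, i.e. 2.19 when rounded to two decimal places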

References

  1. Björn Brembs, Katherine Button and Marcus Munafò. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7 (291): 1–12, 2013.
  2. Ewen Callaway. Beat it, impact factor! Publishing elite turns against controversial metric. Nature, 535 (7611): 210–211, 2016.
  3. Masood Fooladi, Hadi Salehi, Melor Md Yunus, Maryam Farhadi, Arezoo Aghaei Chadegani, Hadi Farhadi, Nader Ale Ebrahim. Does Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9 (5), 2013.
  4. German Research Foundation, "Quality not Quantity" – DFG Adopts Rules to Counter the Flood of Publications in Research, Press Release No. 7, 2010.
  5. Khaled Moustafa. The disaster of the impact factor. Science and Engineering Ethics, 21 (1): 139–142, 2015.
  6. Mike Rossner, Heather Van Epps, Emma Hill. Show me the data. Journal of Cell Biology, 179 (6): 1091–2, 2007.
  7. Science and Technology Committee, Scientific Publications: Free for all? Tenth Report of the Science and Technology Committee of the House of Commons, 2004.
  8. Maarten van Wesel. Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications. Science and Engineering Ethics, 22 (1): 199–225, 2016.
  9. Time to remodel the journal impact factor. Nature, 535: 466, 2016.

Citations

This list of citations may not be complete. Please contact us if citations are missing. There might be errors in the citation data due to automatic processing.

 Open Access 

Deriving Bounds on the Size of Spatial Areas

Erik Buchmann, Patrick Erik Bradley, Klemens Böhm

Open Journal of Databases (OJDB), 2(1), Pages 1-16, 2015, Downloads: 12484

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194566 | GNL-LP: 113236082X | Meta-Data: tex xml rdf rss

Abstract: Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.

BibTex:

    @Article{OJDB-2015v2i1n01_Buchmann,
        title     = {Deriving Bounds on the Size of Spatial Areas},
        author    = {Erik Buchmann and
                     Patrick Erik Bradley and
                     Klemens B\"{o}hm},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194566},
        urn       = {urn:nbn:de:101:1-201705194566},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many application domains such as surveillance, environmental monitoring or sensor-data processing need upper and lower bounds on areas that are covered by a certain feature. For example, a smart-city infrastructure might need bounds on the size of an area polluted with fine-dust, to re-route combustion-engine traffic. Obtaining such bounds is challenging, because in almost any real-world application, information about the region of interest is incomplete, e.g., the database of sensor data contains only a limited number of samples. Existing approaches cannot provide upper and lower bounds or depend on restrictive assumptions, e.g., the area must be convex. Our approach in turn is based on the natural assumption that it is possible to specify a minimal diameter for the feature in question. Given this assumption, we formally derive bounds on the area size, and we provide algorithms that compute these bounds from a database of sensor data, based on geometrical considerations. We evaluate our algorithms both with a real-world case study and with synthetic data.}
    }
0 citations in 2017

 Open Access 

Causal Consistent Databases

Mawahib Musa Elbushra, Jan Lindström

Open Journal of Databases (OJDB), 2(1), Pages 17-35, 2015, Downloads: 15919, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194619 | GNL-LP: 1132360870 | Meta-Data: tex xml rdf rss

Abstract: Many consistency criteria have been considered in databases and the causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides ordering of relative operations. The causal consistency requires that all writes, which are potentially causally related, must be seen in the same order by all processes. The causal consistency is a weaker criteria than the sequential consistency, because there exists an execution, which is causally consistent but not sequentially consistent, however all executions satisfying the sequential consistency are also causally consistent. Furthermore, the causal consistency supports non-blocking operations; i.e. processes may complete read or write operations without waiting for global computation. Therefore, the causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics are precisely captured by the causal consistency, e.g. collaborative tools. In this paper, we review the state-of-the-art of causal consistent databases, discuss the features, functionalities and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causal consistency databases and identify limitations of the causal consistency model.

BibTex:

    @Article{OJDB_2015v2i1n02_Elbushra,
        title     = {Causal Consistent Databases},
        author    = {Mawahib Musa Elbushra and
                     Jan Lindstr\"{o}m},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {17--35},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194619},
        urn       = {urn:nbn:de:101:1-201705194619},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Many consistency criteria have been considered in databases and the causal consistency is one of them. The causal consistency model has gained much attention in recent years because it provides ordering of relative operations. The causal consistency requires that all writes, which are potentially causally related, must be seen in the same order by all processes. The causal consistency is a weaker criteria than the sequential consistency, because there exists an execution, which is causally consistent but not sequentially consistent, however all executions satisfying the sequential consistency are also causally consistent. Furthermore, the causal consistency supports non-blocking operations; i.e. processes may complete read or write operations without waiting for global computation. Therefore, the causal consistency overcomes the primary limit of stronger criteria: communication latency. Additionally, several application semantics are precisely captured by the causal consistency, e.g. collaborative tools. In this paper, we review the state-of-the-art of causal consistent databases, discuss the features, functionalities and applications of the causal consistency model, and systematically compare it with other consistency models. We also discuss the implementation of causal consistency databases and identify limitations of the causal consistency model.}
    }
1 citation in 2017:

An NVM Aware MariaDB Database System and Associated IO Workload on File Systems

Jan Lindström, Dhananjoy Das, Nick Piggin, Santhosh Konundinya, Torben Mathiasen, Nisha Talagala, Dulcardo Arteaga

Open Journal of Databases (OJDB), 4(1), Pages 1-21, 2017.

 Open Access 

An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)

Norhazlina Hamid, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 4-15, 2015, Downloads: 11621, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194487 | GNL-LP: 1132360692 | Meta-Data: tex xml rdf rss

Abstract: Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi - cluster architecture containing clusters of multi-core processors.

BibTex:

    @Article{OJCC_2015v2i1n02_Hamid,
        title     = {An Analytical Model of Multi-Core Multi-Cluster Architecture (MCMCA)},
        author    = {Norhazlina Hamid and
                     Robert John Walters and
                     Gary B. Wills},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194487},
        urn       = {urn:nbn:de:101:1-201705194487},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Multi-core clusters have emerged as an important contribution in computing technology for provisioning additional processing power in high performance computing and communications. Multi-core architectures are proposed for their capability to provide higher performance without increasing heat and power usage, which is the main concern in a single-core processor. This paper introduces analytical models of a new architecture for large-scale multi-core clusters to improve the communication performance within the interconnection network. The new architecture will be based on a multi - cluster architecture containing clusters of multi-core processors.}
    }
0 citations in 2017

 Open Access 

Using Nuisance Telephone Denial of Service to Combat Online Sex Trafficking

Ross A. Malaga

Open Journal of Information Systems (OJIS), 2(1), Pages 1-8, 2015, Downloads: 11384, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194736 | GNL-LP: 1132361036 | Meta-Data: tex xml rdf rss

Abstract: Over the past few years, sex trafficking has been linked to online classified ads sites such as Craigslist.com and Backpage.com. However, to date technology-based solutions have not been used to attack classified ad sites or the advertisers. This paper proposes and tests a new approach to combating online sex trafficking promulgated via online classified ad sites - nuisance telephone denial of service (TDoS) attacks on the advertisers. The method of attack is described and implications are discussed.

BibTex:

    @Article{OJIS_2015v2i1n01_Malaga,
        title     = {Using Nuisance Telephone Denial of Service to Combat Online Sex Trafficking},
        author    = {Ross A. Malaga},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {1--8},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194736},
        urn       = {urn:nbn:de:101:1-201705194736},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Over the past few years, sex trafficking has been linked to online classified ads sites such as Craigslist.com and Backpage.com. However, to date technology-based solutions have not been used to attack classified ad sites or the advertisers. This paper proposes and tests a new approach to combating online sex trafficking promulgated via online classified ad sites - nuisance telephone denial of service (TDoS) attacks on the advertisers. The method of attack is described and implications are discussed.}
    }
0 citations in 2017

 Open Access 

Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic

Syarifah Ezdiani, Adnan Al-Anbuky

Open Journal of Internet Of Things (OJIOT), 1(1), Pages 1-15, 2015, Downloads: 12665, Citations: 17

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244946 | GNL-LP: 1130621979 | Meta-Data: tex xml rdf rss

Abstract: The future of Internet of Things (IoT) is envisaged to consist of a high amount of wireless resource-constrained devices connected to the Internet. Moreover, a lot of novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSN to the Internet has therefore brought forward the requirements of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for the WSN-Internet integration are investigated by first distinguishing the Internet QoS from the WSN QoS. Next, this study emphasizes on WSN applications that involve traffic with different levels of importance, thus the way realtime traffic and delay-tolerant traffic are handled to guarantee QoS in the network is studied. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, being one of the desirable approaches for integrating WSNs to the Internet, is discussed. Next, the implementation of the service model is presented, by considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieve low bound delay while delay-tolerant traffic experience a lower packet dropped, hence indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion on the future work of this research.

BibTex:

    @Article{OJIOT_2015v1i1n02_Syarifah,
        title     = {Modelling the Integrated QoS for Wireless Sensor Networks with Heterogeneous Data Traffic},
        author    = {Syarifah Ezdiani and
                     Adnan Al-Anbuky},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244946},
        urn       = {urn:nbn:de:101:1-201704244946},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The future of Internet of Things (IoT) is envisaged to consist of a high amount of wireless resource-constrained devices connected to the Internet. Moreover, a lot of novel real-world services offered by IoT devices are realized by wireless sensor networks (WSNs). Integrating WSN to the Internet has therefore brought forward the requirements of an end-to-end quality of service (QoS) guarantee. In this paper, the QoS requirements for the WSN-Internet integration are investigated by first distinguishing the Internet QoS from the WSN QoS. Next, this study emphasizes on WSN applications that involve traffic with different levels of importance, thus the way realtime traffic and delay-tolerant traffic are handled to guarantee QoS in the network is studied. Additionally, an overview of the integration strategies is given, and the delay-tolerant network (DTN) gateway, being one of the desirable approaches for integrating WSNs to the Internet, is discussed. Next, the implementation of the service model is presented, by considering both traffic prioritization and service differentiation. Based on the simulation results in OPNET Modeler, it is observed that real-time traffic achieve low bound delay while delay-tolerant traffic experience a lower packet dropped, hence indicating that the needs of real-time and delay-tolerant traffic can be better met by treating both packet types differently. Furthermore, a vehicular network is used as an example case to describe the applicability of the framework in a real IoT application environment, followed by a discussion on the future work of this research.}
    }
5 citations in 2017:

Cross Layered Network Condition Aware Mobile-Wireless Multimedia Sensor Network Routing Protocol for Mission Critical Communication

Ajina A, Mydhili K. Nair

International Journal of Communication Networks and Information Security (IJCNIS), 9(1), 2017.

Cross layer architecture based mobile WSN routing protocol for inter-vehicular communication

Sangappa, Sanjeev Gupta, C. Keshavamurthy

In 3rd International Conference on Computational Intelligence & Communication Technology (CICT), Pages 1-7, 2017.

Cross-layer energy efficient protocol for QoS provisioning in wireless sensor network

Amal Bourmada, Azeddine Bilami

International Journal of Systems, Control and Communications, 8(3), Pages 230-249, 2017.

Towards a Cross-Layer Approach for QoS Support in Wireless Sensor Networks (in French)

Amal Bourmada

Pages 1-119, 2017. Doctoral thesis, Université de Batna 2

Exploring 6LoWPAN Based Intelligent Transport System for Surveillance

Praful D. Bahe, Disha A. Rajgure, Chetana D. Kaurati

International Journal of Electronics, Communication and Soft Computing Science and Engineering (IJECSCSE), 2017. Special Issue: IETE Zonal Seminar "Recent Trends in Engineering & Technology"

 Open Access 

The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things

Paulo Rosa, António Câmara, Cristina Gouveia

Open Journal of Internet Of Things (OJIOT), 1(1), Pages 16-36, 2015, Downloads: 14530, Citations: 43

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244933 | GNL-LP: 1130621448 | Meta-Data: tex xml rdf rss

Abstract: In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and others digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility to Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.

BibTex:

    @Article{OJIOT_2015v1i1n03_Rosa,
        title     = {The Potential of Printed Electronics and Personal Fabrication in Driving the Internet of Things},
        author    = {Paulo Rosa and
                     Ant\'{o}nio C\^{a}mara and
                     Cristina Gouveia},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {16--36},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244933},
        urn       = {urn:nbn:de:101:1-201704244933},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In the early nineties, Mark Weiser, a chief scientist at the Xerox Palo Alto Research Center (PARC), wrote a series of seminal papers that introduced the concept of Ubiquitous Computing. Within this vision, computers and others digital technologies are integrated seamlessly into everyday objects and activities, hidden from our senses whenever not used or needed. An important facet of this vision is the interconnectivity of the various physical devices, which creates an Internet of Things. With the advent of Printed Electronics, new ways to link the physical and digital worlds became available. Common printing technologies, such as screen, flexography, and inkjet printing, are now starting to be used not only to mass-produce extremely thin, flexible and cost effective electronic circuits, but also to introduce electronic functionality into objects where it was previously unavailable. In turn, the growing accessibility to Personal Fabrication tools is leading to the democratization of the creation of technology by enabling end-users to design and produce their own material goods according to their needs. This paper presents a survey of commonly used technologies and foreseen applications in the field of Printed Electronics and Personal Fabrication, with emphasis on the potential to drive the Internet of Things.}
    }
8 citations in 2017:

Selective metallization based on laser direct writing and additive metallization process

Akira Watanabe, Jinguang Cai

In Proc. of SPIE, Laser-based Micro- and Nanoprocessing XI, 2017.

Electrolyte-Gated FETs Based on Oxide Semiconductors: Fabrication and Modeling

Gabriel Cadilha Marques, Suresh Kumar Garlapati, Debaditya Chatterjee, Simone Dehm, Subho Dasgupta, Jasmin Aghassi, Mehdi B. Tahoori

IEEE Transactions on Electron Devices, 64(1), Pages 279-285, 2017.

Development of sensing systems printed with conductive ink on gear surfaces: manufacturing of meander line antenna by laser-sintered silver nano-particles

D. Iba, S. Futagawa, T. Kamimoto, N. Miura, M. Nakamura, T. Iizuka, A. Masuda, A. Sone, I. Moriwaki

In Proc. SPIE 10168, Sensors and Smart Structures Technologies for Civil, Mechanical, and Aerospace Systems, 2017.

Inkjet-Printing: A New Fabrication Technology for Organic Transistors

Giorgio Mattana, Alberto Loi, Marion Woytasik, Massimo Barbaro, Vincent Noël, Benoît Piro

Advanced Materials Technologies, 2(10), 2017.

High-Speed Non-Contact Sheet Resistivity Monitoring of Printed Electronics using Inductive Sensors

Adam P. Lewis, Chris Hunt, Owen Thomas, Martin Wickham

Flexible and Printed Electronics, 2017.

Selective metallization based on laser direct writing and additive metallization process

Akira Watanabe, Jinguang Cai

In Laser-based Micro-and Nanoprocessing XI, 2017.

Fabrication of a Crack Detection Sensor by Printing Conductive Ink on the Side Surfaces of POM Spur Gears (in Japanese)

Shintaro Futagawa, Daisuke Iba, Takahiro Kamimoto, Morimasa Nakamura, Nanako Miura, Takashi Iizuka, Arata Masuda, Akira Sone, Ichiro Moriwaki

In Annual Meeting, 2017.

Laser direct writing of reduced graphene oxide microelectrodes and the device application

Akira Watanabe, Jinguang Cai

International Congress on Applications of Lasers & Electro-Optics, 2017(1), 2017.

 Open Access 

IT Governance Practices for Electric Utilities: Insights from Brazil and Europe

Paulo Rupino da Cunha, Luiz Mauricio Martins, Antão Moura, António Dias de Figueiredo

Open Journal of Information Systems (OJIS), 2(1), Pages 9-28, 2015, Downloads: 11242, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194743 | GNL-LP: 1132361044 | Meta-Data: tex xml rdf rss

Abstract: We propose a framework of 14 IT governance practices tailored for the electric utilities sector. They were selected and ranked as "essential", "important", or "good" by top executives and IT staff from two multi-billion dollar companies - one in Brazil and another in Europe - from a generic set of 83 collected in the literature and in the field. Our framework addresses a need of electric utilities for which specific guidance was lacking. We have also uncovered a significant impact of social issues in IT governance, whose depth seems to be missing in the current research. As a byproduct of our work, the larger generic framework from which we have departed and the tailoring method that we have proposed can be used to customize the generic framework to different industries.

BibTex:

    @Article{OJIS_2015v2i1n02_Cunha,
        title     = {IT Governance Practices for Electric Utilities: Insights from Brazil and Europe},
        author    = {Paulo Rupino da Cunha and
                     Luiz Mauricio Martins and
                     Ant\~{a}o Moura and
                     Ant\'{o}nio Dias de Figueiredo},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {9--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194743},
        urn       = {urn:nbn:de:101:1-201705194743},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {We propose a framework of 14 IT governance practices tailored for the electric utilities sector. They were selected and ranked as "essential", "important", or "good" by top executives and IT staff from two multi-billion dollar companies - one in Brazil and another in Europe - from a generic set of 83 collected in the literature and in the field. Our framework addresses a need of electric utilities for which specific guidance was lacking. We have also uncovered a significant impact of social issues in IT governance, whose depth seems to be missing in the current research. As a byproduct of our work, the larger generic framework from which we have departed and the tailoring method that we have proposed can be used to customize the generic framework to different industries.}
    }
0 citations in 2017

 Open Access 

Relationship between Externalized Knowledge and Evaluation in the Process of Creating Strategic Scenarios

Teruaki Hayashi, Yukio Ohsawa

Open Journal of Information Systems (OJIS), 2(1), Pages 29-40, 2015, Downloads: 6687, Citations: 10

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194751 | GNL-LP: 1132361079 | Meta-Data: tex xml rdf rss

Abstract: Social systems are changing so rapidly that it is important for humans to make decisions considering uncertainty. A scenario is information about the series of events/actions, which supports decision makers to take actions and reduce risks. We propose Action Planning for refining simple ideas into practical scenarios (strategic scenarios). Frameworks and items on Action Planning Sheets provide participants with organized constraints, to lead to creative and logical thinking for solving real issues in businesses or daily life. Communication among participants who have preset roles leads the externalization of knowledge. In this study, we set three criteria for evaluating strategic scenarios; novelty, utility, and feasibility, and examine the relationship between externalized knowledge and the evaluation values, in order to consider factors which affect the evaluations. Regarding a word contained in roles and scenarios as the smallest unit of knowledge, we calculate Relativeness between roles and scenarios. The results of our experiment suggest that the lower the relativeness of a strategic scenario, the higher the strategic scenario is evaluated in novelty. In addition, in the evaluation of utility, a scenario satisfying a covert requirement tends to be estimated higher. Moreover, we found the externalization of stakeholders may affect the realization of strategic scenarios.

BibTex:

    @Article{OJIS_2015v2i1n03_Hayashi,
        title     = {Relationship between Externalized Knowledge and Evaluation in the Process of Creating Strategic Scenarios},
        author    = {Teruaki Hayashi and
                     Yukio Ohsawa},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {29--40},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194751},
        urn       = {urn:nbn:de:101:1-201705194751},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Social systems are changing so rapidly that it is important for humans to make decisions considering uncertainty. A scenario is information about the series of events/actions, which supports decision makers to take actions and reduce risks. We propose Action Planning for refining simple ideas into practical scenarios (strategic scenarios). Frameworks and items on Action Planning Sheets provide participants with organized constraints, to lead to creative and logical thinking for solving real issues in businesses or daily life. Communication among participants who have preset roles leads the externalization of knowledge. In this study, we set three criteria for evaluating strategic scenarios; novelty, utility, and feasibility, and examine the relationship between externalized knowledge and the evaluation values, in order to consider factors which affect the evaluations. Regarding a word contained in roles and scenarios as the smallest unit of knowledge, we calculate Relativeness between roles and scenarios. The results of our experiment suggest that the lower the relativeness of a strategic scenario, the higher the strategic scenario is evaluated in novelty. In addition, in the evaluation of utility, a scenario satisfying a covert requirement tends to be estimated higher. Moreover, we found the externalization of stakeholders may affect the realization of strategic scenarios.}
    }
2 citations in 2017:

The Role of Externalization in the Creation of Management Knowledge

Janaynna Ferraz, Débora Eleonora Pereira da Silva, Jefferson David Araujo Sales, Rafael Lucian

In XX SemeAd, Seminários em Administração, Pages 1-15, 2017.

A Study on the Process of Examining Data Utilization Scenarios Based on Knowledge Structuring in the Data Market (in Japanese)

Teruaki Hayashi

2017.

 Open Access 

Achieving Business Practicability of Model-Driven Cross-Platform Apps

Tim A. Majchrzak, Jan Ernsting, Herbert Kuchen

Open Journal of Information Systems (OJIS), 2(2), Pages 4-15, 2015, Downloads: 10661

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194768 | GNL-LP: 1132361095 | Meta-Data: tex xml rdf rss

Abstract: Due to the incompatibility of mobile device platforms such as Android and iOS, apps have to be developed separately for each target platform. Cross-platform development approaches based on Web technology have significantly improved over the last years. However, since they do not lead to native apps, these frameworks are not feasible for all kinds of business apps. Moreover, the way apps are developed is cumbersome. Advanced cross-platform approaches such as MD2, which is based on model-driven development (MDSD) techniques, are a much more powerful yet less mature choice. We discuss business implications of MDSD for apps and introduce MD2 as our proposed solution to fulfill typical requirements. Moreover, we highlight a business-oriented enhancement that further increases MD2's business practicability. We generalize our findings and sketch the path towards more versatile MDSD of apps.

BibTex:

    @Article{OJIS_2015v2i2n02_Majchrzak,
        title     = {Achieving Business Practicability of Model-Driven Cross-Platform Apps},
        author    = {Tim A. Majchrzak and
                     Jan Ernsting and
                     Herbert Kuchen},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194768},
        urn       = {urn:nbn:de:101:1-201705194768},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Due to the incompatibility of mobile device platforms such as Android and iOS, apps have to be developed separately for each target platform. Cross-platform development approaches based on Web technology have significantly improved over the last years. However, since they do not lead to native apps, these frameworks are not feasible for all kinds of business apps. Moreover, the way apps are developed is cumbersome. Advanced cross-platform approaches such as MD2, which is based on model-driven development (MDSD) techniques, are a much more powerful yet less mature choice. We discuss business implications of MDSD for apps and introduce MD2 as our proposed solution to fulfill typical requirements. Moreover, we highlight a business-oriented enhancement that further increases MD2's business practicability. We generalize our findings and sketch the path towards more versatile MDSD of apps.}
    }
0 citations in 2017

 Open Access 

Concept Design for Creating Essential Hypothesis, Rules, and Goals: Toward a Data Marketplace

Jun Nakamura, Masahiko Teramoto

Open Journal of Information Systems (OJIS), 2(2), Pages 16-26, 2015, Downloads: 7327, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194774 | GNL-LP: 1132361117 | Meta-Data: tex xml rdf rss

Abstract: The abductive reasoning model has been discussed in the context of business strategy. However, this model seems unrealistic for applications in the real business world considering the unpredictable, competitive business environment. This study improves the model by formulating an experimental case study through a web-based workplace for generating product ideas. We discuss the possible embodiment of product ideas as the basis for configuring features through the use of dynamic quality function deployment. The entire concept design process is proposed as a blueprint for building a data marketplace.

BibTex:

    @Article{OJIS_2015v2i2n03_Nakamura,
        title     = {Concept Design for Creating Essential Hypothesis, Rules, and Goals: Toward a Data Marketplace},
        author    = {Jun Nakamura and
                     Masahiko Teramoto},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {16--26},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194774},
        urn       = {urn:nbn:de:101:1-201705194774},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The abductive reasoning model has been discussed in the context of business strategy. However, this model seems unrealistic for applications in the real business world considering the unpredictable, competitive business environment. This study improves the model by formulating an experimental case study through a web-based workplace for generating product ideas. We discuss the possible embodiment of product ideas as the basis for configuring features through the use of dynamic quality function deployment. The entire concept design process is proposed as a blueprint for building a data marketplace.}
    }
0 citations in 2017

 Open Access 

An Efficient Approach for Cost Optimization of the Movement of Big Data

Prasad Teli, Manoj V. Thomas, K. Chandrasekaran

Open Journal of Big Data (OJBD), 1(1), Pages 4-15, 2015, Downloads: 11201, Citations: 11

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194335 | GNL-LP: 113236048X | Meta-Data: tex xml rdf rss

Abstract: With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by the service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volume of data in the form of application data as well as the logs. This paper mainly focuses on the challenge of moving big data from one data center to other. We provide an efficient algorithm for the optimization of cost in the movement of the big data from one data center to another for offline environment. This approach uses the graph model for data centers in the cloud and results show that the adopted mechanism provides a better solution to minimize the cost for data movement.

BibTex:

    @Article{OJBD_2015v1i1n02_Teli,
        title     = {An Efficient Approach for Cost Optimization of the Movement of Big Data},
        author    = {Prasad Teli and
                     Manoj V. Thomas and
                     K. Chandrasekaran},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194335},
        urn       = {urn:nbn:de:101:1-201705194335},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {With the emergence of cloud computing, Big Data has caught the attention of many researchers in the area of cloud computing. As the Volume, Velocity and Variety (3 Vs) of big data are growing exponentially, dealing with them is a big challenge, especially in the cloud environment. Looking at the current trend of the IT sector, cloud computing is mainly used by the service providers to host their applications. A lot of research has been done to improve the network utilization of WAN (Wide Area Network) and it has achieved considerable success over the traditional LAN (Local Area Network) techniques. While dealing with this issue, the major questions of data movement such as from where to where this big data will be moved and also how the data will be moved, have been overlooked. As various applications generating the big data are hosted in geographically distributed data centers, they individually collect large volume of data in the form of application data as well as the logs. This paper mainly focuses on the challenge of moving big data from one data center to other. We provide an efficient algorithm for the optimization of cost in the movement of the big data from one data center to another for offline environment. This approach uses the graph model for data centers in the cloud and results show that the adopted mechanism provides a better solution to minimize the cost for data movement.}
    }
1 citation in 2017:

Efficient placement design and storage cost saving for big data workflow in cloud datacenters

Sonia Ikken

PhD Thesis, jointly at Télécom Sud Paris and Pierre and Marie Curie University, 2017.

 Open Access 

Cognitive Spam Recognition Using Hadoop and Multicast-Update

Mukund YR, Sunil Sandeep Nayak, K. Chandrasekaran

Open Journal of Big Data (OJBD), 1(1), Pages 16-28, 2015, Downloads: 10469, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194340 | GNL-LP: 1132360498 | Meta-Data: tex xml rdf rss

Abstract: In today's world of exponentially growing technology, spam is a very common issue faced by users on the internet. Spam not only hinders the performance of a network, but it also wastes space and time, causes general irritation, and presents a multitude of dangers: viruses, malware, spyware and consequent system failure, identity theft, and other cybercriminal activity. In this context, cognition provides us with a method to help improve the performance of a distributed system. It enables the system to learn what it is supposed to do for different input types as classifications are made over time, and this learning helps it increase its accuracy as time passes. Each system on its own can only do so much learning, because of the limited sample set of inputs that it gets to process. However, in a network, we can make sure that every system knows the different kinds of inputs available and learns what it is supposed to do with a better success rate. Thus, distribution and combination of this cognition across different components of the network leads to an overall improvement in the performance of the system. In this paper, we describe a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. We also present two possible methods of implementation - using a MapReduce framework (Hadoop), and using messages coupled with a multicast-send based network - with their own subtypes, and the pros and cons of each. We finally present a comparative analysis of the two main methods and provide a basic idea about the usefulness of each in various scenarios.
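
Neither the Hadoop implementation nor the multicast-update protocol is reproduced in this listing; the sketch below only illustrates the core Naive Bayesian labeling step that each node would learn and share (the training examples are invented):

    import math
    from collections import Counter

    def train(docs):
        """docs: (text, label) pairs; returns word counts and class counts."""
        counts = {"spam": Counter(), "ham": Counter()}
        class_totals = Counter()
        for text, label in docs:
            class_totals[label] += 1
            counts[label].update(text.lower().split())
        return counts, class_totals

    def classify(text, counts, class_totals):
        vocab = set(counts["spam"]) | set(counts["ham"])
        best_label, best_score = None, float("-inf")
        for label in counts:
            # log prior plus add-one-smoothed log likelihoods
            score = math.log(class_totals[label] / sum(class_totals.values()))
            total = sum(counts[label].values())
            for word in text.lower().split():
                score += math.log((counts[label][word] + 1) / (total + len(vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

    docs = [("win money now", "spam"), ("meeting at noon", "ham"),
            ("cheap money offer", "spam"), ("lunch at noon", "ham")]
    model = train(docs)
    print(classify("free money", *model))  # spam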

BibTex:

    @Article{OJBD_2015v1i1n03_YR,
        title     = {Cognitive Spam Recognition Using Hadoop and Multicast-Update},
        author    = {Mukund YR and
                     Sunil Sandeep Nayak and
                     K. Chandrasekaran},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {1},
        pages     = {16--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194340},
        urn       = {urn:nbn:de:101:1-201705194340},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In today's world of exponentially growing technology, spam is a very common issue faced by users on the internet. Spam not only hinders the performance of a network, but it also wastes space and time, causes general irritation, and presents a multitude of dangers: viruses, malware, spyware and consequent system failure, identity theft, and other cybercriminal activity. In this context, cognition provides us with a method to help improve the performance of a distributed system. It enables the system to learn what it is supposed to do for different input types as classifications are made over time, and this learning helps it increase its accuracy as time passes. Each system on its own can only do so much learning, because of the limited sample set of inputs that it gets to process. However, in a network, we can make sure that every system knows the different kinds of inputs available and learns what it is supposed to do with a better success rate. Thus, distribution and combination of this cognition across different components of the network leads to an overall improvement in the performance of the system. In this paper, we describe a method to make machines cognitively label spam using Machine Learning and the Naive Bayesian approach. We also present two possible methods of implementation - using a MapReduce framework (Hadoop), and using messages coupled with a multicast-send based network - with their own subtypes, and the pros and cons of each. We finally present a comparative analysis of the two main methods and provide a basic idea about the usefulness of each in various scenarios.}
    }
0 citations in 2017

 Open Access 

Evidential Sensor Data Fusion in a Smart City Environment

Aditya Gaur, Bryan W. Scotney, Gerard P. Parr, Sally I. McClean

Open Journal of Internet Of Things (OJIOT), 1(2), Pages 1-18, 2015, Downloads: 13313, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244969 | GNL-LP: 113062319X | Meta-Data: tex xml rdf rss

Abstract: Wireless sensor networks have increasingly become contributors of very large amounts of data. The recent deployment of wireless sensor networks in Smart City infrastructures has led to very large amounts of data being generated each day across a variety of domains, with applications including environmental monitoring, healthcare monitoring and transport monitoring. The information generated through the wireless sensor nodes has made possible the visualization of a Smart City environment for better living. The Smart City offers intelligent infrastructure and a cognitive environment for the elderly and other people living in the Smart society. Different types of sensors are present that help in monitoring inhabitants' behaviour and their interaction with real-world objects. To take advantage of the increasing amounts of data, there is a need for new methods and techniques for effective data management and analysis that generate information to assist in managing resources intelligently and dynamically. Through this research, a Smart City ontology model is proposed, which addresses the fusion process for uncertain sensor data using semantic web technologies and Dempster-Shafer uncertainty theory. Based on information handling methods such as Dempster-Shafer theory (DST), an equally weighted sum operator and a maximization operation, a higher level of contextual information is inferred from the low-level sensor data fusion process. In addition, the proposed ontology model helps in learning new rules that can be used to define new knowledge in the Smart City system.
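
The ontology model itself is not reproduced in this listing, but the fusion step rests on Dempster's rule of combination, which fits in a few lines. In the toy example below, two uncertain sources report on an invented occupancy frame and are fused into a normalized joint belief:

    from itertools import product

    def combine(m1, m2):
        """Dempster's rule: fuse two mass functions given as
        {frozenset_of_hypotheses: mass} dictionaries."""
        combined, conflict = {}, 0.0
        for (a, ma), (b, mb) in product(m1.items(), m2.items()):
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb  # mass assigned to disjoint hypotheses
        if conflict >= 1.0:
            raise ValueError("total conflict; sources cannot be fused")
        return {h: m / (1.0 - conflict) for h, m in combined.items()}

    # Two hypothetical sensors reporting on room occupancy
    s1 = {frozenset({"occupied"}): 0.7,
          frozenset({"occupied", "empty"}): 0.3}
    s2 = {frozenset({"occupied"}): 0.6, frozenset({"empty"}): 0.2,
          frozenset({"occupied", "empty"}): 0.2}
    print(combine(s1, s2))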

BibTex:

    @Article{OJIOT_2015v1i2n02_Gaur,
        title     = {Evidential Sensor Data Fusion in a Smart City Environment},
        author    = {Aditya Gaur and
                     Bryan W. Scotney and
                     Gerard P. Parr and
                     Sally I. McClean},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244969},
        urn       = {urn:nbn:de:101:1-201704244969},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Wireless sensor networks have increasingly become contributors of very large amounts of data. The recent deployment of wireless sensor networks in Smart City infrastructures has led to very large amounts of data being generated each day across a variety of domains, with applications including environmental monitoring, healthcare monitoring and transport monitoring. The information generated through the wireless sensor nodes has made possible the visualization of a Smart City environment for better living. The Smart City offers intelligent infrastructure and a cognitive environment for the elderly and other people living in the Smart society. Different types of sensors are present that help in monitoring inhabitants' behaviour and their interaction with real-world objects. To take advantage of the increasing amounts of data, there is a need for new methods and techniques for effective data management and analysis that generate information to assist in managing resources intelligently and dynamically. Through this research, a Smart City ontology model is proposed, which addresses the fusion process for uncertain sensor data using semantic web technologies and Dempster-Shafer uncertainty theory. Based on information handling methods such as Dempster-Shafer theory (DST), an equally weighted sum operator and a maximization operation, a higher level of contextual information is inferred from the low-level sensor data fusion process. In addition, the proposed ontology model helps in learning new rules that can be used to define new knowledge in the Smart City system.}
    }
0 citations in 2017

 Open Access 

Model of Creative Thinking Process on Analysis of Handwriting by Digital Pen

Kenshin Ikegami, Yukio Ohsawa

Open Journal of Information Systems (OJIS), 2(2), Pages 27-39, 2015, Downloads: 6215, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194781 | GNL-LP: 1132361125 | Meta-Data: tex xml rdf rss

Abstract: In order to perceive infrequent events as hints for new ideas, it is desirable to know and model the process of creating and refining ideas. In this paper, we address this modeling problem experimentally. Firstly, we focus on the relation between thinking time and writing time in handwriting. We observe two types of patterns: one group takes longer in thinking and shorter in writing, the other takes longer in writing and shorter in thinking. The group that spends longer on writing has a shorter time span from one sentence to another than the other group. Backtracking, i.e., the event that participants return to a former sheet and modify opinions, is observed more often in the group of longer writing than in the other group. In addition, participants in this backtracking group get higher scores for the ideas on their sheets than those in the no-backtracking group. We propose a model of creative thinking by applying the Operations of the Structure of Intellect. It is inferred that the group of longer writing conducts a single thinking flow, including divergent thinking, convergent thinking and evaluation. In contrast, the group of longer thinking tends to conduct two different thinking flows: divergent thinking and evaluation; convergent thinking and evaluation. For creating ideas, divergent thinking should be conducted without evaluation, producing a large number of ideas. We conclude that the rotation of divergent thinking, convergent thinking and evaluation increases the frequency of "backtracking" and makes the ideas more logical.

BibTex:

    @Article{OJIS_2015v2i2n04_Ikegami,
        title     = {Model of Creative Thinking Process on Analysis of Handwriting by Digital Pen},
        author    = {Kenshin Ikegami and
                     Yukio Ohsawa},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {27--39},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194781},
        urn       = {urn:nbn:de:101:1-201705194781},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In order to perceive infrequent events as hints for new ideas, it is desirable to know and model the process of creating and refining ideas. In this paper, we address this modeling problem experimentally. Firstly, we focus on the relation between thinking time and writing time in handwriting. We observe two types of patterns: one group takes longer in thinking and shorter in writing, the other takes longer in writing and shorter in thinking. The group that spends longer on writing has a shorter time span from one sentence to another than the other group. Backtracking, i.e., the event that participants return to a former sheet and modify opinions, is observed more often in the group of longer writing than in the other group. In addition, participants in this backtracking group get higher scores for the ideas on their sheets than those in the no-backtracking group. We propose a model of creative thinking by applying the Operations of the Structure of Intellect. It is inferred that the group of longer writing conducts a single thinking flow, including divergent thinking, convergent thinking and evaluation. In contrast, the group of longer thinking tends to conduct two different thinking flows: divergent thinking and evaluation; convergent thinking and evaluation. For creating ideas, divergent thinking should be conducted without evaluation, producing a large number of ideas. We conclude that the rotation of divergent thinking, convergent thinking and evaluation increases the frequency of "backtracking" and makes the ideas more logical.}
    }
0 citations in 2017

 Open Access 

Accurate Distance Estimation between Things: A Self-correcting Approach

Ho-sik Cho, Jianxun Ji, Zili Chen, Hyuncheol Park, Wonsuk Lee

Open Journal of Internet Of Things (OJIOT), 1(2), Pages 19-27, 2015, Downloads: 26621, Citations: 15

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244959 | GNL-LP: 1130622525 | Meta-Data: tex xml rdf rss

Abstract: This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low-Energy profile) interfaces with smaller distance errors. BLE is a well-known technology for low-power connectivity, suitable for IoT devices as well as for proximity sensing within a range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon can only provide an immediate, near or far status, but not a real and accurate distance. To provide more accurate distances using BLE, this paper introduces an additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting the self-correcting beacon for measuring the distance, the average distance error is less than 10% within a range of 1.5 meters. Some considerations are presented for extending the range over which accurate distances can be obtained.
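
The self-correction protocol is not reproduced in this listing; RSS-based ranging in general builds on the log-distance path-loss model sketched below, where a reference beacon at a known 1 m distance would supply the calibrated reference RSSI (the exponent and readings here are invented):

    def estimate_distance(rssi, rssi_at_1m, path_loss_exponent=2.0):
        """Log-distance path-loss model: distance in meters from a measured
        RSSI and a reference RSSI calibrated at 1 m."""
        return 10 ** ((rssi_at_1m - rssi) / (10 * path_loss_exponent))

    # The self-correcting beacon re-calibrates the 1 m reference for the
    # current environment instead of relying on a factory constant.
    calibrated_reference = -61.5  # dBm, hypothetical measurement
    print(round(estimate_distance(-70, calibrated_reference), 2))  # ~2.66 m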

BibTex:

    @Article{OJIOT_2015v1i2n03_Cho,
        title     = {Accurate Distance Estimation between Things: A Self-correcting Approach},
        author    = {Ho-sik Cho and
                     Jianxun Ji and
                     Zili Chen and
                     Hyuncheol Park and
                     Wonsuk Lee},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {19--27},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244959},
        urn       = {urn:nbn:de:101:1-201704244959},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper suggests a method to measure the physical distance between an IoT device (a Thing) and a mobile device (also a Thing) using BLE (Bluetooth Low-Energy profile) interfaces with smaller distance errors. BLE is a well-known technology for low-power connectivity, suitable for IoT devices as well as for proximity sensing within a range of several meters. Apple has already adopted the technique and enhanced it to provide subdivided proximity range levels. However, as it is also a variation of RSS-based distance estimation, Apple's iBeacon can only provide an immediate, near or far status, but not a real and accurate distance. To provide more accurate distances using BLE, this paper introduces an additional self-correcting beacon to calibrate the reference distance and mitigate errors from environmental factors. By adopting the self-correcting beacon for measuring the distance, the average distance error is less than 10\% within a range of 1.5 meters. Some considerations are presented for extending the range over which accurate distances can be obtained.}
    }
4 citations in 2017:

Location-Aware Speakers for the Virtual Reality Environments

Chang Ha Lee

IEEE Access, 5, Pages 2636-2640, 2017.

Identification and distance estimation of users and objects by means of electronic beacons in social robotics

Fernando Alonso-Martín, Alvaro Castro-González, María Malfaz, José Carlos Castillo, Miguel A. Salichs

Expert Systems with Applications, 86, Pages 247 - 257, 2017.

A BLE RSSI ranking based indoor positioning system for generic smartphones

Zixiang Ma, Stefan Poslad, John Bigham, Xiaoshuai Zhang, Liang Men

In Wireless Telecommunications Symposium, WTS 2017, Chicago, IL, USA, Pages 1-8, 2017.

Improving BLE Distance Estimation and Classification Using TX Power and Machine Learning: A Comparative Analysis

Mimonah Al Qathrady, Ahmed Helmy

In Proceedings of the 20th ACM International Conference on Modelling, Analysis and Simulation of Wireless and Mobile Systems (MSWiM), Miami, Florida, USA, Pages 79-83, 2017.

 Open Access 

Emerging Software as a Service and Analytics

Victor Chang, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 1-3, 2015, Downloads: 5176

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194461 | GNL-LP: 1132360641 | Meta-Data: tex xml rdf rss

Abstract: This special issue of Open Journal of Cloud Computing (OJCC) (www.ronpub.com/journals/ojcc) reports work in the field of emerging software as a service and analytics, and presents innovative approaches to delivering software services in research and enterprise communities. It contains extended versions of papers selected from the international workshop on Emerging Software as a Service and Analytics (ESaaSA), held in association with the International Conference on Cloud Computing and Services Science, which took place in Barcelona, Spain in April 2014. OJCC is published by RonPub (www.ronpub.com), an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJCC_2015v2i1n01e_Chang,
        title     = {Emerging Software as a Service and Analytics},
        author    = {Victor Chang and
                     Robert John Walters and
                     Gary B. Wills},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {1--3},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194461},
        urn       = {urn:nbn:de:101:1-201705194461},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This special issue of Open Journal of Cloud Computing (OJCC) (www.ronpub.com/journals/ojcc) reports work in the field of emerging software as a service and analytics, and presents innovative approaches to delivering software services in research and enterprise communities. It contains extended versions of papers selected from the international workshop on Emerging Software as a Service and Analytics (ESaaSA), held in association with the International Conference on Cloud Computing and Services Science, which took place in Barcelona, Spain in April 2014. OJCC is published by RonPub (www.ronpub.com), an academic publisher of online, open access, peer-reviewed journals.}
    }
0 citations in 2017

 Open Access 

BEAUFORD: A Benchmark for Evaluation of Formalisation of Definitions in OWL

Cheikh Kacfah Emani, Catarina Ferreira Da Silva, Bruno Fiés, Parisa Ghodous

Open Journal of Semantic Web (OJSW), 2(1), Pages 4-15, 2015, Downloads: 5997, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194879 | GNL-LP: 1132361257 | Meta-Data: tex xml rdf rss

Abstract: In this paper we present BEAUFORD, a benchmark for methods which aim to provide formal expressions of concepts using the natural language (NL) definitions of these concepts. Adding formal expressions of concepts to a given ontology allows reasoners to infer more useful pieces of information or to detect inconsistencies in the ontology. To the best of our knowledge, BEAUFORD is the first benchmark to tackle this ontology enrichment problem. BEAUFORD allows a given formalisation approach to be broken down by identifying its key features. In addition, BEAUFORD provides strong mechanisms to efficiently evaluate an approach even in cases of ambiguity, which is a major challenge in the formalisation of NL resources. Indeed, BEAUFORD takes into account the fact that a given NL phrase can be formalised in many ways. Hence, it proposes a suitable specification to represent these multiple formalisations. Taking advantage of this specification, BEAUFORD redefines classical precision and recall and introduces other metrics to take into account the fact that there is no unique way to formalise a definition. Finally, BEAUFORD comprises a well-suited dataset to concretely judge the efficiency of formalisation methods. Using BEAUFORD, current approaches to the formalisation of definitions can be compared accurately using a suitable gold standard.
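
BEAUFORD's exact metric definitions are not reproduced in this listing; the sketch below only captures the underlying idea of crediting a system answer that matches any of several acceptable formalisations (the gold standard and predictions are invented):

    def precision_recall(predictions, gold):
        """predictions: {definition_id: formalisation or None};
        gold: {definition_id: set of acceptable formalisations}.
        A prediction counts as correct if it matches ANY acceptable form."""
        attempted = [d for d, p in predictions.items() if p is not None]
        correct = sum(1 for d in attempted if predictions[d] in gold[d])
        precision = correct / len(attempted) if attempted else 0.0
        recall = correct / len(gold) if gold else 0.0
        return precision, recall

    gold = {"d1": {"A subClassOf B", "A equivalentTo (B and C)"},
            "d2": {"X subClassOf Y"}}
    predictions = {"d1": "A subClassOf B", "d2": None}
    print(precision_recall(predictions, gold))  # (1.0, 0.5)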

BibTex:

    @Article{OJSW_2015v2i1n02_Kachfah,
        title     = {BEAUFORD: A Benchmark for Evaluation of Formalisation of Definitions in OWL},
        author    = {Cheikh Kacfah Emani and
                     Catarina Ferreira Da Silva and
                     Bruno Fi\'{e}s and
                     Parisa Ghodous},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194879},
        urn       = {urn:nbn:de:101:1-201705194879},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this paper we present BEAUFORD, a benchmark for methods which aim to provide formal expressions of concepts using the natural language (NL) definitions of these concepts. Adding formal expressions of concepts to a given ontology allows reasoners to infer more useful pieces of information or to detect inconsistencies in the ontology. To the best of our knowledge, BEAUFORD is the first benchmark to tackle this ontology enrichment problem. BEAUFORD allows a given formalisation approach to be broken down by identifying its key features. In addition, BEAUFORD provides strong mechanisms to efficiently evaluate an approach even in cases of ambiguity, which is a major challenge in the formalisation of NL resources. Indeed, BEAUFORD takes into account the fact that a given NL phrase can be formalised in many ways. Hence, it proposes a suitable specification to represent these multiple formalisations. Taking advantage of this specification, BEAUFORD redefines classical precision and recall and introduces other metrics to take into account the fact that there is no unique way to formalise a definition. Finally, BEAUFORD comprises a well-suited dataset to concretely judge the efficiency of formalisation methods. Using BEAUFORD, current approaches to the formalisation of definitions can be compared accurately using a suitable gold standard.}
    }
0 citations in 2017

 Open Access 

Big Data in the Cloud: A Survey

Pedro Caldeira Neves, Jorge Bernardino

Open Journal of Big Data (OJBD), 1(2), Pages 1-18, 2015, Downloads: 13079, Citations: 14

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194365 | GNL-LP: 1132360528 | Meta-Data: tex xml rdf rss

Abstract: Big Data has become a hot topic across several business areas, requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities, and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available, and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. In particular, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop and Twister, as well as other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.
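
As a reminder of the programming model that several of the surveyed systems build on, here is a minimal single-machine sketch of the MapReduce pattern, with in-memory map, shuffle and reduce phases standing in for their distributed counterparts:

    from collections import defaultdict
    from itertools import chain

    def map_phase(documents, mapper):
        return chain.from_iterable(mapper(doc) for doc in documents)

    def shuffle(pairs):
        groups = defaultdict(list)
        for key, value in pairs:
            groups[key].append(value)
        return groups

    def reduce_phase(groups, reducer):
        return {key: reducer(key, values) for key, values in groups.items()}

    # The classic word count expressed in MapReduce style
    docs = ["big data in the cloud", "big data processing"]
    pairs = map_phase(docs, lambda d: ((w, 1) for w in d.split()))
    print(reduce_phase(shuffle(pairs), lambda k, vs: sum(vs)))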

BibTex:

    @Article{OJBD_2015v1i2n02_Neves,
        title     = {Big Data in the Cloud: A Survey},
        author    = {Pedro Caldeira Neves and
                     Jorge Bernardino},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194365},
        urn       = {urn:nbn:de:101:1-201705194365},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Big Data has become a hot topic across several business areas, requiring the storage and processing of huge volumes of data. Cloud computing leverages Big Data by providing high storage and processing capabilities, and enables corporations to consume resources in a pay-as-you-go model, making clouds the optimal environment for storing and processing huge quantities of data. By using virtualized resources, the Cloud can scale very easily, be highly available, and provide massive storage capacity and processing power. This paper surveys existing database models to store and process Big Data within a Cloud environment. In particular, we detail the following traditional NoSQL databases: BigTable, Cassandra, DynamoDB, HBase, Hypertable, and MongoDB. The MapReduce framework and its developments Apache Spark, HaLoop and Twister, as well as other alternatives such as Apache Giraph, GraphLab, Pregel and MapD - a novel platform that uses GPU processing to accelerate Big Data processing - are also analyzed. Finally, we present two case studies that demonstrate the successful use of Big Data within Cloud environments and the challenges that must be addressed in the future.}
    }
4 citations in 2017:

Building an Undergraduate Course in Data-Driven Methodologies

Grigore Albeanu

In The International Scientific Conference eLearning and Software for Education, Pages 62, 2017.

Big-data NoSQL databases: A comparison and analysis of “Big-Table”, “DynamoDB”, and “Cassandra”

Sultana Kalid, Ali Syed, Azeem Mohammad, Malka N. Halgamuge

In IEEE 2nd International Conference on Big Data Analysis (ICBDA), Pages 89-93, 2017.

The Ability of Cloud Computing Performance Benchmarks to Measure Dependability

Eduardo Carvalho, Raul Barbosa, Jorge Bernardino

In Proceedings of the 12th International Conference on Software Technologies - Volume 1: ICSOFT, Pages 447-452, 2017.

Service-Oriented Architecture for Big Data and Business Intelligence Analytics in the Cloud

Muthu Ramachandran

In Computational Intelligence Applications in Business and Big Data Analytics, 2017.

 Open Access 

Ontology Evolution Using Ontology Templates

Miroslav Blasko, Petr Kremen, Zdenek Kouba

Open Journal of Semantic Web (OJSW), 2(1), Pages 16-29, 2015, Downloads: 6274, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194898 | GNL-LP: 1132361281 | Meta-Data: tex xml rdf rss

Abstract: Evolving ontologies by domain experts is difficult and typically cannot be performed without the assistance of an ontology engineer. This process takes a long time, and often recurrent modeling errors have to be resolved. This paper proposes a technique for creating controlled ontology evolution scenarios that ensure consistency of the possible ontology evolution and give guarantees to the domain expert that his/her updates do not cause inconsistency. We introduce ontology templates that formalize the notion of controlled evolution, and define an ontology template consistency checking service together with a consistency checking algorithm. We prove correctness and demonstrate the practical use of the techniques in two scenarios.
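
The template formalism and its consistency checking service are not reproduced in this listing; the sketch below only illustrates the general idea of confining experts to parameterized axiom patterns, expanding one hypothetical subclass template into RDF triples (requires the rdflib package; all names are invented):

    from rdflib import Graph, Literal, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    EX = Namespace("http://example.org/onto#")

    def apply_subclass_template(graph, new_class, parent, label):
        """A toy template: each instantiation adds a named class, places it
        under a fixed parent and attaches a label, so expert edits stay
        within a predictable, checkable fragment of the ontology."""
        graph.add((new_class, RDF.type, OWL.Class))
        graph.add((new_class, RDFS.subClassOf, parent))
        graph.add((new_class, RDFS.label, Literal(label)))

    g = Graph()
    g.bind("ex", EX)
    apply_subclass_template(g, EX.ElectricCar, EX.Car, "electric car")
    print(g.serialize(format="turtle"))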

BibTex:

    @Article{OJSW_2015v2i1n03_Blasko,
        title     = {Ontology Evolution Using Ontology Templates},
        author    = {Miroslav Blasko and
                     Petr Kremen and
                     Zdenek Kouba},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {16--29},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194898},
        urn       = {urn:nbn:de:101:1-201705194898},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Evolving ontologies by domain experts is difficult and typically cannot be performed without the assistance of an ontology engineer. This process takes a long time, and often recurrent modeling errors have to be resolved. This paper proposes a technique for creating controlled ontology evolution scenarios that ensure consistency of the possible ontology evolution and give guarantees to the domain expert that his/her updates do not cause inconsistency. We introduce ontology templates that formalize the notion of controlled evolution, and define an ontology template consistency checking service together with a consistency checking algorithm. We prove correctness and demonstrate the practical use of the techniques in two scenarios.}
    }
2 citations in 2017:

A Semantic Safety Check System for Emergency Management

Yogesh Pandey, Srividya K. Bansal

Open Journal of Semantic Web (OJSW), 4(1), Pages 35-50, 2017.

Ontology-Based Data Integration in Multi-Disciplinary Engineering Environments: A Review

Fajar J. Ekaputra, Marta Sabou, Estefanía Serral, Elmar Kiesling, Stefan Biffl

Open Journal of Information Systems (OJIS), 4(1), Pages 1-26, 2017.

 Open Access 

Designing the Market of Data - For Practical Data Sharing via Educational and Innovative Communications

Yukio Ohsawa, Akinori Abe

Open Journal of Information Systems (OJIS), 2(2), Pages 1-3, 2015, Downloads: 5045

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194720 | GNL-LP: 1132361001 | Meta-Data: tex xml rdf rss

Abstract: This special issue of Open Journal of Information Systems (OJIS) reports work on designing the market of data for practical data sharing via educational and innovative communications (MoDAT). In the market of data, data are dealt with reasonably: sold, opened, or shared based on negotiation. For the last several years, we have been aiming at realizing a social environment where each person feels free to share one's own and others' data for learning the latent value of data, without fearing the loss of business opportunities. In the market, data and analysts' knowledge are shared by selling and buying, with the conditions for sharing determined reasonably. People in the market may communicate with each other in order to decide to expose data as open source, if the trust in the data provider is expected to be elevated due to the contribution to the public. Thus the Market of Data means a place where the value of data and knowledge can be externalized. OJIS is published by RonPub (www.ronpub.com), an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJIS_2015v1i2n01p_Ohsawa,
        title     = {Designing the Market of Data - For Practical Data Sharing via Educational and Innovative Communications},
        author    = {Yukio Ohsawa and
                     Akinori Abe},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {1--3},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194720},
        urn       = {urn:nbn:de:101:1-201705194720},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This special issue of Open Journal of Information Systems (OJIS) reports work on designing the market of data for practical data sharing via educational and innovative communications (MoDAT). In the market of data, data are dealt with reasonably: sold, opened, or shared based on negotiation. For the last several years, we have been aiming at realizing a social environment where each person feels free to share one's own and others' data for learning the latent value of data, without fearing the loss of business opportunities. In the market, data and analysts' knowledge are shared by selling and buying, with the conditions for sharing determined reasonably. People in the market may communicate with each other in order to decide to expose data as open source, if the trust in the data provider is expected to be elevated due to the contribution to the public. Thus the Market of Data means a place where the value of data and knowledge can be externalized. OJIS is published by RonPub (www.ronpub.com), an academic publisher of online, open access, peer-reviewed journals.}
    }
0 citations in 2017

 Open Access 

A Toulmin's Framework-Based Method for Design Argumentation of Cyber-Physical Systems

Noriyuki Kushiro, Ryoichi Torikai, Shodai Matsuda, Kunio Takahara

Open Journal of Information Systems (OJIS), 2(2), Pages 40-55, 2015, Downloads: 6014, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194809 | GNL-LP: 1132361141 | Meta-Data: tex xml rdf rss

Abstract: The design of cyber-physical systems (CPS) is a promising domain which the data market is expected to penetrate soon. When engineers focus on only a particular part of the data (whether intentionally or not) for establishing a design hypothesis, the design hypothesis may also be supported by data sets in the market. Therefore, the validity of such a design hypothesis cannot be evaluated by the data itself, and can only be accepted on the robustness of the logic behind the design argumentation. Although the validation of the design logic is significant, cognitive aspects (which people have spontaneously) disturb reasoning in design argumentation. Therefore, a design method that overcomes these cognitive aspects is indispensable for CPS designers. This work proposes a CPS design method using the interaction between logic and data sets with a logic visualization tool, and applies the proposed method to the design of a diagnosis system for semiconductor manufacturing. The capability of the proposed method is also discussed and analyzed in this paper.
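
Toulmin's framework, on which the method builds, decomposes an argument into claim, data, warrant, backing, qualifier and rebuttals; the sketch below captures that structure as a small data type (the diagnosis strings are invented, and the authors' visualization tool is not reproduced):

    from dataclasses import dataclass, field

    @dataclass
    class ToulminArgument:
        """A claim supported by data via a warrant, optionally strengthened
        by backing, limited by a qualifier and open to rebuttals."""
        claim: str
        data: list = field(default_factory=list)
        warrant: str = ""
        backing: str = ""
        qualifier: str = "presumably"
        rebuttals: list = field(default_factory=list)

        def is_supported(self):
            # Minimal check: a design argument needs at least grounds
            # (data) and a warrant linking those grounds to the claim.
            return bool(self.data) and bool(self.warrant)

    argument = ToulminArgument(
        claim="Chamber pressure drift causes the etching defects",
        data=["defect rate correlates with pressure log deviations"],
        warrant="correlated process deviations are candidate root causes",
        rebuttals=["the correlation may stem from a shared scheduling effect"],
    )
    print(argument.is_supported())  # True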

BibTex:

    @Article{OJIS_2015v2i2n05_Kushiro,
        title     = {A Toulmin's Framework-Based Method for Design Argumentation of Cyber-Physical Systems},
        author    = {Noriyuki Kushiro and
                     Ryoichi Torikai and
                     Shodai Matsuda and
                     Kunio Takahara},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {40--55},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194809},
        urn       = {urn:nbn:de:101:1-201705194809},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The design of cyber-physical systems (CPS) is a promising domain which the data market is expected to penetrate soon. When engineers focus on only a particular part of the data (whether intentionally or not) for establishing a design hypothesis, the design hypothesis may also be supported by data sets in the market. Therefore, the validity of such a design hypothesis cannot be evaluated by the data itself, and can only be accepted on the robustness of the logic behind the design argumentation. Although the validation of the design logic is significant, cognitive aspects (which people have spontaneously) disturb reasoning in design argumentation. Therefore, a design method that overcomes these cognitive aspects is indispensable for CPS designers. This work proposes a CPS design method using the interaction between logic and data sets with a logic visualization tool, and applies the proposed method to the design of a diagnosis system for semiconductor manufacturing. The capability of the proposed method is also discussed and analyzed in this paper.}
    }
0 citations in 2017

 Open Access 

PatTrieSort - External String Sorting based on Patricia Tries

Sven Groppe, Dennis Heinrich, Stefan Werner, Christopher Blochwitz, Thilo Pionteck

Open Journal of Databases (OJDB), 2(1), Pages 36-50, 2015, Downloads: 13919, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194627 | GNL-LP: 1132360889 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as patricia tries instead of lists of sorted strings. As we will show in this paper, patricia tries can be merged efficiently, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, in which our new approach outperforms traditional external merge sort by a factor of 4 when sorting over 4 billion strings of real-world data.
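
The patricia-trie merging itself is not reproduced in this listing; the sketch below only shows the surrounding external merge sort skeleton, with an uncompressed trie standing in for a patricia trie during run generation and in-memory lists standing in for disk-resident runs:

    import heapq

    END = object()  # marks end-of-string inside the trie

    def trie_insert(trie, s):
        node = trie
        for ch in s:
            node = node.setdefault(ch, {})
        node[END] = True

    def trie_walk(trie, prefix=""):
        """Yield the trie's strings in sorted order: the 'initial run'."""
        if END in trie:
            yield prefix
        for ch in sorted(k for k in trie if k is not END):
            yield from trie_walk(trie[ch], prefix + ch)

    # Run generation: each memory-sized batch becomes one sorted run.
    batches = [["tree", "trie", "tea"], ["ten", "to", "tea"]]
    runs = []
    for batch in batches:
        trie = {}
        for s in batch:
            trie_insert(trie, s)
        runs.append(list(trie_walk(trie)))

    # Merge phase: a k-way merge of the sorted runs.
    print(list(heapq.merge(*runs)))  # ['tea', 'tea', 'ten', 'to', 'tree', 'trie']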

BibTex:

    @Article{OJDB_2015v2i1n03_Groppe,
        title     = {PatTrieSort - External String Sorting based on Patricia Tries},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Stefan Werner and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {36--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194627},
        urn       = {urn:nbn:de:101:1-201705194627},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {External merge sort belongs to the most efficient and widely used algorithms for sorting big data: as much data as fits into main memory is sorted there and afterwards swapped to external storage as a so-called initial run. After sorting all the data block-wise in this way, the initial runs are merged in a merging phase in order to retrieve the final sorted run containing the completely sorted original data. Patricia tries are one of the most space-efficient ways to store strings, especially those with common prefixes. Hence, we propose to use patricia tries for initial run generation in an external merge sort variant, such that initial runs can become large compared to traditional external merge sort using the same main memory size. Furthermore, we store the initial runs as patricia tries instead of lists of sorted strings. As we will show in this paper, patricia tries can be merged efficiently, with superior performance in comparison to merging runs of sorted strings. We complete our discussion with a complexity analysis as well as a comprehensive performance evaluation, in which our new approach outperforms traditional external merge sort by a factor of 4 when sorting over 4 billion strings of real-world data.}
    }
0 citations in 2017

 Open Access 

Why Is This Link Here? Identifying Academic Web Interlinking Motivations in Nigerian Universities

Anthony Nwohiri

Open Journal of Web Technologies (OJWT), 2(1), Pages 4-15, 2015, Downloads: 5118, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291363 | GNL-LP: 1133021581 | Meta-Data: tex xml rdf rss

Abstract: This paper investigates the university websites of Nigeria, Africa's most populous nation. Its aim is to identify the motivations for which authors embed outbound hyperlinks on these websites. A classification scheme for academic web interlinking motivations was applied to over 5,000 hyperlinks pointing from the websites of 107 Nigerian universities. Classifying the motivations based on studying the source and target pages is a big challenge, especially for the following three reasons: there could be many possible reasons; guessing the true intentions of link creators could be difficult; and multiple link creation motivations could exist. The pioneering application of Pearson's chi-square test of independence offers a better picture of the motivations. The chi-square test identifies significant differences in interlinking motivations that are peculiar to Nigerian universities of a particular category (federal, state and private universities). The study is a stepping stone toward further research on the feasibility of these findings in other developing countries. Results obtained from this research will be of great use to academic webpage developers and web authors, and will help them improve the use of hyperlinks as one of the major communication tools on the Web.
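
The study's contingency data are not reproduced in this listing, but a chi-square test of independence between university category and link motivation takes only a few lines (the counts below are invented; requires scipy):

    from scipy.stats import chi2_contingency

    # Hypothetical link counts per motivation, by university category:
    #            social  research  teaching
    observed = [[120,    80,       40],   # federal
                [ 60,    30,       25],   # state
                [ 90,    20,       15]]   # private
    chi2, p, dof, expected = chi2_contingency(observed)
    print(f"chi2={chi2:.2f}, dof={dof}, p={p:.4f}")
    # A small p-value indicates that motivations differ across categories.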

BibTex:

    @Article{OJWT_2015v2i1n02_Nwohir,
        title     = {Why Is This Link Here? Identifying Academic Web Interlinking Motivations in Nigerian Universities},
        author    = {Anthony Nwohiri},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {4--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291363},
        urn       = {urn:nbn:de:101:1-201705291363},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper investigates the university websites of Nigeria, Africa's most populous nation. Its aim is to identify the motivations for which authors embed outbound hyperlinks on these websites. A classification scheme for academic web interlinking motivations was applied to over 5,000 hyperlinks pointing from the websites of 107 Nigerian universities. Classifying the motivations based on studying the source and target pages is a big challenge, especially for the following three reasons: there could be many possible reasons; guessing the true intentions of link creators could be difficult; and multiple link creation motivations could exist. The pioneering application of Pearson's chi-square test of independence offers a better picture of the motivations. The chi-square test identifies significant differences in interlinking motivations that are peculiar to Nigerian universities of a particular category (federal, state and private universities). The study is a stepping stone toward further research on the feasibility of these findings in other developing countries. Results obtained from this research will be of great use to academic webpage developers and web authors, and will help them improve the use of hyperlinks as one of the major communication tools on the Web.}
    }
0 citations in 2017

 Open Access 

Cooperative Hybrid Cloud Intermediaries - Making Cloud Sourcing Feasible for Small and Medium-sized Enterprises

Till Haselmann, Gottfried Vossen, Stuart Dillon

Open Journal of Cloud Computing (OJCC), 2(2), Pages 4-20, 2015, Downloads: 4881, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194494 | GNL-LP: 1132360714 | Meta-Data: tex xml rdf rss

Abstract: "The cloud" is widely advertised as a silver bullet for many IT-related challenges of small and medium-sized enterprises (SMEs). While it can potentially have a number of attractive benefits, many SMEs refrain from using cloud sourcing and cloud services because of high upfront costs for building the appropriate knowledge in the enterprise, for searching and screening of possible cloud service providers, and for mastering the intricate legal issues related to outsourcing sensitive data. This paper presents the concept of hybrid cloud intermediaries, an approach that can address many of the prevailing issues. With the aid of empirical findings from a cross-nation study of cloud adoption in SMEs for context, we describe the concept in detail and show conceivable variants, including a comprehensive cross-perspective consolidated model of cloud intermediary value-creation. Subsequently, we analyze the benefits of such a hybrid cloud intermediary for addressing cloud adoption issues in SMEs, and suggest suitable governance structures based on the cooperative paradigm. The resulting entity - a cooperative hybrid cloud intermediary or, more concisely, co-op cloud - is discussed in detail showing both feasible scenarios and limitations for SMEs that would like to engage in a cloud-sourcing.

BibTex:

    @Article{OJCC_2015v2i2n02_Haselmann,
        title     = {Cooperative Hybrid Cloud Intermediaries - Making Cloud Sourcing Feasible for Small and Medium-sized Enterprises},
        author    = {Till Haselmann and
                     Gottfried Vossen and
                     Stuart Dillon},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {4--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194494},
        urn       = {urn:nbn:de:101:1-201705194494},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {"The cloud" is widely advertised as a silver bullet for many IT-related challenges of small and medium-sized enterprises (SMEs). While it can potentially have a number of attractive benefits, many SMEs refrain from using cloud sourcing and cloud services because of high upfront costs for building the appropriate knowledge in the enterprise, for searching and screening of possible cloud service providers, and for mastering the intricate legal issues related to outsourcing sensitive data. This paper presents the concept of hybrid cloud intermediaries, an approach that can address many of the prevailing issues. With the aid of empirical findings from a cross-nation study of cloud adoption in SMEs for context, we describe the concept in detail and show conceivable variants, including a comprehensive cross-perspective consolidated model of cloud intermediary value-creation. Subsequently, we analyze the benefits of such a hybrid cloud intermediary for addressing cloud adoption issues in SMEs, and suggest suitable governance structures based on the cooperative paradigm. The resulting entity - a cooperative hybrid cloud intermediary or, more concisely, co-op cloud - is discussed in detail showing both feasible scenarios and limitations for SMEs that would like to engage in a cloud-sourcing.}
    }
2 citations in 2017:

The Web at Graduation and Beyond

Gottfried Vossen, Frank Schönthaler, Stuart Dillon

Springer, 2017.

Security and Compliance Ontology for Cloud Service Agreements

Ana Sofía Zalazar, Luciana Ballejos, Sebastian Rodriguez

Open Journal of Cloud Computing (OJCC), 4(1), Pages 17-25, 2017.

 Open Access 

A Trust-Based Approach for Management of Dynamic QoS Violations in Cloud Federation Environments

Manoj V. Thomas, K. Chandrasekaran

Open Journal of Cloud Computing (OJCC), 2(2), Pages 21-43, 2015, Downloads: 6325, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194523 | GNL-LP: 1132360765 | Meta-Data: tex xml rdf rss

Abstract: Cloud Federation is an emerging technology where Cloud Service Providers (CSPs) offering specialized services to customers collaborate in order to reap the real benefits of Cloud Computing. When a CSP in the Cloud Federation runs out of resources, it can get the required resources from other partners in the federation. Normally, there are QoS agreements between the partners in the federation for resource sharing. In this paper, we propose a trust-based mechanism for the management of dynamic QoS violations when one CSP requests resources from another CSP in the federation. In this work, we have implemented the partner selection process, for when one CSP does not have enough resources, using the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods, also considering the trust values of the various CSPs in the federation. We have also implemented Single Sign-On (SSO) authentication in the cloud federation using the Fully Hashed Menezes-Qu-Vanstone (FHMQV) protocol and the AES-256 algorithm. The proposed trust-based approach is used to dynamically manage the QoS violations among the partners in the federation. We have implemented the proposed approach using the CloudSim toolkit, and an analysis of the results is also given.
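
The AHP weighting and the trust model are not reproduced in this listing; the sketch below only shows a standard TOPSIS ranking step of the kind used in partner selection (the criteria, weights and partner scores are invented; requires numpy):

    import numpy as np

    def topsis(matrix, weights, benefit):
        """Rank alternatives (rows) over criteria (columns);
        benefit[j] is True when higher is better for criterion j."""
        m = np.asarray(matrix, dtype=float)
        weighted = m / np.linalg.norm(m, axis=0) * np.asarray(weights)
        ideal = np.where(benefit, weighted.max(axis=0), weighted.min(axis=0))
        anti = np.where(benefit, weighted.min(axis=0), weighted.max(axis=0))
        d_pos = np.linalg.norm(weighted - ideal, axis=1)
        d_neg = np.linalg.norm(weighted - anti, axis=1)
        return d_neg / (d_pos + d_neg)  # closeness: higher is better

    # Hypothetical CSP partners scored on trust, cost and latency (ms)
    scores = [[0.9, 30, 120], [0.7, 20, 100], [0.8, 25, 150]]
    closeness = topsis(scores, [0.5, 0.3, 0.2], benefit=[True, False, False])
    print(int(closeness.argmax()))  # index of the preferred partner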

BibTex:

    @Article{OJCC_2015v2i2n03_Thomas,
        title     = {A Trust-Based Approach for Management of Dynamic QoS Violations in Cloud Federation Environments},
        author    = {Manoj V. Thomas and
                     K. Chandrasekaran},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {21--43},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194523},
        urn       = {urn:nbn:de:101:1-201705194523},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Cloud Federation is an emerging technology where Cloud Service Providers (CSPs) offering specialized services to customers collaborate in order to reap the real benefits of Cloud Computing. When a CSP in the Cloud Federation runs out of resources, it can get the required resources from other partners in the federation. Normally, there are QoS agreements between the partners in the federation for resource sharing. In this paper, we propose a trust-based mechanism for the management of dynamic QoS violations when one CSP requests resources from another CSP in the federation. In this work, we have implemented the partner selection process, for when one CSP does not have enough resources, using the Analytic Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) methods, also considering the trust values of the various CSPs in the federation. We have also implemented Single Sign-On (SSO) authentication in the cloud federation using the Fully Hashed Menezes-Qu-Vanstone (FHMQV) protocol and the AES-256 algorithm. The proposed trust-based approach is used to dynamically manage the QoS violations among the partners in the federation. We have implemented the proposed approach using the CloudSim toolkit, and an analysis of the results is also given.}
    }
0 citations in 2017

 Open Access 

Advances in Cloud and Ubiquitous Computing

Sven Groppe, K. Chandrasekaran

Open Journal of Cloud Computing (OJCC), 2(2), Pages 1-3, 2015, Downloads: 7700

Full-Text: pdf | URN: urn:nbn:de:101:1-201704229173 | GNL-LP: 1130488268 | Meta-Data: tex xml rdf rss

Abstract: Cloud computing provides on-demand access to a shared pool of configurable and dynamically reallocated computing resources, typically located in third-party data centers. Ubiquitous computing aims at providing computing resources anytime and everywhere, using any device, in any location, and in any format. This special issue, Advances in Cloud and Ubiquitous Computing (ACUC), aims at addressing the challenges and reporting the latest research findings in the fields of Cloud Computing and Ubiquitous Computing, and at showing how the new technologies of Cloud Computing and Ubiquitous Computing complement each other.

BibTex:

    @Article{OJCC_2015v2i2n01e_Groppe,
        title     = {Advances in Cloud and Ubiquitous Computing},
        author    = {Sven Groppe and
                     K. Chandrasekaran},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {2},
        pages     = {1--3},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704229173},
        urn       = {urn:nbn:de:101:1-201704229173},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Cloud computing provides on-demand access to a shared pool of configurable and dynamically reallocated computing resources, typically located in third-party data centers. Ubiquitous computing aims at providing computing resources anytime and everywhere, using any device, in any location, and in any format. This special issue, Advances in Cloud and Ubiquitous Computing (ACUC), aims at addressing the challenges and reporting the latest research findings in the fields of Cloud Computing and Ubiquitous Computing, and at showing how the new technologies of Cloud Computing and Ubiquitous Computing complement each other.}
    }
0 citations in 2017

 Open Access 

Detecting Vital Documents in Massive Data Streams

Shun Kawahara, Kazuhiro Seki, Kuniaki Uehara

Open Journal of Web Technologies (OJWT), 2(1), Pages 16-26, 2015, Downloads: 6657, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291373 | GNL-LP: 113302159X | Meta-Data: tex xml rdf rss

Abstract: Existing knowledge bases, including Wikipedia, are typically written and maintained by a group of voluntary editors. Meanwhile, numerous web documents are being published, partly due to the popularization of online news and social media. Some of these web documents, called "vital documents", contain novel information that should be taken into account in updating articles of the knowledge bases. However, it is practically impossible for the editors to manually monitor all the relevant web documents. Consequently, there is a considerable time lag between an edit to the knowledge base and the publication dates of such vital documents. This paper proposes a realtime detection framework for web documents containing novel information flowing in massive document streams. The framework consists of a two-step filter using statistical language models. Further, the framework is implemented on the distributed and fault-tolerant realtime computation system Apache Storm, in order to process the large number of web documents. On a publicly available web document data set, the TREC KBA Stream Corpus, the validity of the proposed framework is demonstrated in terms of detection performance and processing time.
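
The two-step filter and the Storm topology are not reproduced in this listing; as one plausible illustration of scoring with statistical language models, the sketch below rates an incoming document by its cross-entropy under a unigram model of the knowledge-base article, so that documents with more unseen content score as more novel (both texts are invented):

    import math
    from collections import Counter

    def unigram_lm(text, vocab):
        counts = Counter(text.lower().split())
        total = sum(counts.values())
        # add-one smoothing over a shared vocabulary
        return {w: (counts[w] + 1) / (total + len(vocab)) for w in vocab}

    def novelty(article, document):
        """Cross-entropy of the document under the article's language
        model; higher values suggest more novel (vital) content."""
        vocab = set(article.lower().split()) | set(document.lower().split())
        lm = unigram_lm(article, vocab)
        words = document.lower().split()
        return -sum(math.log(lm[w]) for w in words) / len(words)

    kb_article = "the company was founded in 2001 and makes routers"
    stream_doc = "the company announced a merger with a rival today"
    print(round(novelty(kb_article, stream_doc), 2))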

BibTex:

    @Article{OJWT_2015v2i1n03_Kawahara,
        title     = {Detecting Vital Documents in Massive Data Streams},
        author    = {Shun Kawahara and
                     Kazuhiro Seki and
                     Kuniaki Uehara},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {16--26},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291373},
        urn       = {urn:nbn:de:101:1-201705291373},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Existing knowledge bases, including Wikipedia, are typically written and maintained by a group of voluntary editors. Meanwhile, numerous web documents are being published, partly due to the popularization of online news and social media. Some of these web documents, called "vital documents", contain novel information that should be taken into account in updating articles of the knowledge bases. However, it is practically impossible for the editors to manually monitor all the relevant web documents. Consequently, there is a considerable time lag between an edit to the knowledge base and the publication dates of such vital documents. This paper proposes a realtime detection framework for web documents containing novel information flowing in massive document streams. The framework consists of a two-step filter using statistical language models. Further, the framework is implemented on the distributed and fault-tolerant realtime computation system Apache Storm, in order to process the large number of web documents. On a publicly available web document data set, the TREC KBA Stream Corpus, the validity of the proposed framework is demonstrated in terms of detection performance and processing time.}
    }
0 citations in 2017

 Open Access 

Distributed Join Approaches for W3C-Conform SPARQL Endpoints

Sven Groppe, Dennis Heinrich, Stefan Werner

Open Journal of Semantic Web (OJSW), 2(1), Pages 30-52, 2015, Downloads: 11770, Citations: 6

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194910 | GNL-LP: 1132361303 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as the intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (local ones and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.
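
For instance, a semi-join can be expressed against any W3C-conform endpoint by shipping the locally bound values inside a standard VALUES clause, so that only join partners travel back over the network. The Python sketch below uses the SPARQLWrapper library; the endpoint URL, the predicate and the variable names are illustrative, not taken from the paper:

    from SPARQLWrapper import SPARQLWrapper, JSON

    def semi_join(endpoint_url, local_bindings):
        """Ship locally bound IRIs to the endpoint so that only join partners
        are transferred back (a semi-join via the standard VALUES clause)."""
        values = " ".join(f"<{iri}>" for iri in local_bindings)
        query = f"""
            SELECT ?person ?name WHERE {{
                VALUES ?person {{ {values} }}
                ?person <http://xmlns.com/foaf/0.1/name> ?name .
            }}"""
        endpoint = SPARQLWrapper(endpoint_url)
        endpoint.setQuery(query)
        endpoint.setReturnFormat(JSON)
        result = endpoint.query().convert()
        return [(b["person"]["value"], b["name"]["value"])
                for b in result["results"]["bindings"]]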

BibTex:

    @Article{OJSW_2015v2i1n04_Groppe,
        title     = {Distributed Join Approaches for W3C-Conform SPARQL Endpoints},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Stefan Werner},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {30--52},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194910},
        urn       = {urn:nbn:de:101:1-201705194910},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Currently many SPARQL endpoints are freely available and accessible without any costs to users: Everyone can submit SPARQL queries to SPARQL endpoints via a standardized protocol, where the queries are processed on the datasets of the SPARQL endpoints and the query results are sent back to the user in a standardized format. As these distributed execution environments for semantic big data (as the intersection of semantic data and big data) are freely accessible, the Semantic Web is an ideal playground for big data research. However, when utilizing these distributed execution environments, questions about performance arise. Especially when several datasets (local ones and those residing in SPARQL endpoints) need to be combined, distributed joins need to be computed. In this work we give an overview of the various possibilities of distributed join processing in SPARQL endpoints, which follow the SPARQL specification and hence are "W3C conform". We also introduce new distributed join approaches as variants of the Bitvector-Join and a combination of the Semi- and Bitvector-Join. Finally we compare all the existing and newly proposed distributed join approaches for W3C conform SPARQL endpoints in an extensive experimental evaluation.}
    }
3 citations in 2017:

Assessing and Improving Domain Knowledge Representation in DBpedia

Ludovic Font, Amal Zouaq, Michel Gagnon

Open Journal of Semantic Web (OJSW), 4(1), Pages 1-19, 2017.

Extended Adaptive Join Operator with Bind-Bloom Join for Federated SPARQL Queries

Damla Oguz, Shaoyi Yin, Belgin Ergenç, Abdelkader Hameurlain, Oguz Dikenelli

International Journal of Data Warehousing and Mining (IJDWM), 13(3), Pages 47-72, 2017.

Ontology-Based Data Integration in Multi-Disciplinary Engineering Environments: A Review

Fajar J. Ekaputra, Marta Sabou, Estefanía Serral, Elmar Kiesling, Stefan Biffl

Open Journal of Information Systems (OJIS), 4(1), Pages 1-26, 2017.

 Open Access 

Context-Dependent Testing of Applications for Mobile Devices

Tim A. Majchrzak, Matthias Schulte

Open Journal of Web Technologies (OJWT), 2(1), Pages 27-39, 2015, Downloads: 6934, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291390 | GNL-LP: 1133021646 | Meta-Data: tex xml rdf rss

Abstract: Applications propel the versatility of mobile devices. Apps enable the realization of new ideas and greatly contribute to the proliferation of mobile computing. Unfortunately, the software quality of apps is often low. This can be attributed, at least partly, to problems with testing them. However, it is not a lack of techniques or tools that makes app testing cumbersome. Rather, frequent context changes have to be dealt with. Mobile devices most notably move: network parameters such as latency and usable bandwidth change, along with data read from sensors such as GPS coordinates. Additionally, usage patterns vary. To address context changes in testing, we propose a novel concept. It is based on identifying blocks of code between which context changes are possible. It helps to greatly reduce complexity. Besides introducing our concept, we present a use case, show its application and benefits, and discuss challenges.
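
The block-boundary idea can be sketched in a few lines of Python; the context model, the example app blocks and the set of context changes below are invented purely for illustration:

    import itertools

    class Context:
        """Toy device context: connectivity and position."""
        def __init__(self):
            self.online = True
            self.gps = (53.87, 10.69)

    def change_network(ctx): ctx.online = not ctx.online
    def change_location(ctx): ctx.gps = (48.14, 11.58)

    def test_boundaries(blocks, changes):
        """Apply every context change at every block boundary: the test space
        becomes boundaries x changes instead of arbitrary interleavings."""
        for i, change in itertools.product(range(1, len(blocks)), changes):
            ctx = Context()
            for block in blocks[:i]:
                block(ctx)
            change(ctx)                # context changes between blocks i-1 and i
            for block in blocks[i:]:
                block(ctx)             # the remaining blocks see the new context

    # Example app with two blocks; the second must tolerate going offline.
    app = [lambda ctx: None,
           lambda ctx: print("syncing" if ctx.online else "queuing for later")]
    test_boundaries(app, [change_network, change_location])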

BibTex:

    @Article{OJWT_2015v2i1n04_Majchrzak,
        title     = {Context-Dependent Testing of Applications for Mobile Devices},
        author    = {Tim A. Majchrzak and
                     Matthias Schulte},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {27--39},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291390},
        urn       = {urn:nbn:de:101:1-201705291390},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Applications propel the versatility of mobile devices. Apps enable the realization of new ideas and greatly contribute to the proliferation of mobile computing. Unfortunately, the software quality of apps is often low. This can be attributed, at least partly, to problems with testing them. However, it is not a lack of techniques or tools that makes app testing cumbersome. Rather, frequent context changes have to be dealt with. Mobile devices most notably move: network parameters such as latency and usable bandwidth change, along with data read from sensors such as GPS coordinates. Additionally, usage patterns vary. To address context changes in testing, we propose a novel concept. It is based on identifying blocks of code between which context changes are possible. It helps to greatly reduce complexity. Besides introducing our concept, we present a use case, show its application and benefits, and discuss challenges.}
    }
1 citation in 2017:

How Cross-Platform Technology Can Facilitate Easier Creation of Business Apps

Tim A. Majchrzak, Jan C. Dageförde, Jan Ernsting, Christoph Rieger, Tobias Reischmann

In Apps Management and E-Commerce Transactions in Real-Time, Pages 104-140, 2017.

 Open Access 

Statistical Machine Learning in Brain State Classification using EEG Data

Yuezhe Li, Yuchou Chang, Hong Lin

Open Journal of Big Data (OJBD), 1(2), Pages 19-33, 2015, Downloads: 11173, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194354 | GNL-LP: 113236051X | Meta-Data: tex xml rdf rss

Abstract: In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which is collected from different brain states on different subjects. Also, we discuss how the training data size influences the misclassification rate. Moreover, the number of subjects that contribute to the training data affects the misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results of each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish idle from active states.
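
A hedged sketch of such a feature-plus-classifier pipeline is shown below: a naive sample-entropy implementation feeds scikit-learn's random forest. The channel count, epoch layout, the parameters m and r, and the random data are placeholders, not the study's setup:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def sample_entropy(x, m=2, r=None):
        """Naive O(N^2) sample entropy of a 1-D signal."""
        x = np.asarray(x, dtype=float)
        r = 0.2 * x.std() if r is None else r
        def pairs(mm):
            t = np.array([x[i:i + mm] for i in range(len(x) - mm)])
            d = np.max(np.abs(t[:, None] - t[None, :]), axis=2)  # Chebyshev
            return (np.sum(d <= r) - len(t)) / 2                 # no self-matches
        b, a = pairs(m), pairs(m + 1)
        return -np.log(a / b) if a > 0 and b > 0 else np.inf

    # One sample-entropy feature per channel, then an off-the-shelf classifier.
    rng = np.random.default_rng(0)
    X = np.array([[sample_entropy(rng.normal(size=256)) for _ in range(7)]
                  for _ in range(40)])       # 40 epochs x 7 channels
    y = rng.integers(0, 2, size=40)          # idle vs. active labels
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)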

BibTex:

    @Article{OJBD_2015v1i2n03_YuehzeLi,
        title     = {Statistical Machine Learning in Brain State Classification using EEG Data},
        author    = {Yuezhe Li and
                     Yuchou Chang and
                     Hong Lin},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {19--33},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194354},
        urn       = {urn:nbn:de:101:1-201705194354},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this article, we discuss how to use a variety of machine learning methods, e.g. tree bagging, random forest, boosting, support vector machines, and Gaussian mixture models, for building classifiers for electroencephalogram (EEG) data, which is collected from different brain states on different subjects. Also, we discuss how the training data size influences the misclassification rate. Moreover, the number of subjects that contribute to the training data affects the misclassification rate. Furthermore, we discuss how sample entropy contributes to building a classifier. Our results show that classification based on sample entropy gives the smallest misclassification rate. Moreover, two data sets were collected from one channel and seven channels respectively. The classification results of each data set show that the more channels we use, the less misclassification we have. Our results show that it is promising to build a self-adaptive classification system by using EEG data to distinguish idle from active states.}
    }
1 citation in 2017:

Using EEG Data Analytics to Measure Meditation.

Hong Lin, Yuezhe Li

In Digital Human Modeling. Applications in Health, Safety, Ergonomics, and Risk Management: Health and Safety - 8th International Conference, DHM 2017, Held as Part of HCI International 2017, Vancouver, BC, Canada, July 9-14, 2017, Proceedings, Part II, Pages 270-280, 2017.

 Open Access 

Data Transfers in Hadoop: A Comparative Study

Ujjal Marjit, Kumar Sharma, Puspendu Mandal

Open Journal of Big Data (OJBD), 1(2), Pages 34-46, 2015, Downloads: 13911, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194373 | GNL-LP: 1132360536 | Meta-Data: tex xml rdf rss

Abstract: Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are emphasized. Moreover, a state-of-the-art comparative study among the various tools is made. With this study, it can be decided where to use one tool over the other, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions in terms of Big Data transfer when dealing with cloud-based services.
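
As a hedged illustration of this tool space, the sketch below invokes two widely used transfer tools, Sqoop (relational-to-HDFS import) and DistCp (cluster-to-cluster copy); the connection string, table and paths are made up, and the exact flags depend on the cluster setup:

    import subprocess

    # Sqoop: bulk import of a relational table into HDFS.
    subprocess.run([
        "sqoop", "import",
        "--connect", "jdbc:mysql://db.example.com/sales",
        "--username", "etl", "--table", "orders",
        "--target-dir", "/data/sales/orders",
    ], check=True)

    # DistCp: inter-cluster copy, e.g. for backup and disaster recovery.
    subprocess.run([
        "hadoop", "distcp",
        "hdfs://cluster-a:8020/data/sales",
        "hdfs://cluster-b:8020/backup/sales",
    ], check=True)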

BibTex:

    @Article{OJBD_2015v1i2n04_UjjalMarjit,
        title     = {Data Transfers in Hadoop: A Comparative Study},
        author    = {Ujjal Marjit and
                     Kumar Sharma and
                     Puspendu Mandal},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2015},
        volume    = {1},
        number    = {2},
        pages     = {34--46},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194373},
        urn       = {urn:nbn:de:101:1-201705194373},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Hadoop is an open source framework for processing large amounts of data in a distributed computing environment. It plays an important role in processing and analyzing Big Data. This framework is used for storing data on large clusters of commodity hardware. Data input and output to and from Hadoop is an indispensable action for any data processing job. At present, many tools have evolved for importing and exporting data in Hadoop. In this article, some commonly used tools for importing and exporting data are emphasized. Moreover, a state-of-the-art comparative study among the various tools is made. With this study, it can be decided where to use one tool over the other, with emphasis on data transfer to and from the Hadoop system. This article also discusses how Hadoop handles backup and disaster recovery, along with some open research questions in terms of Big Data transfer when dealing with cloud-based services.}
    }
0 citations in 2017

 Open Access 

Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing

Karolj Skala, Davor Davidovic, Enis Afgan, Ivan Sovic, Zorislav Sojat

Open Journal of Cloud Computing (OJCC), 2(1), Pages 16-24, 2015, Downloads: 22323, Citations: 168

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194519 | GNL-LP: 1132360749 | Meta-Data: tex xml rdf rss

Abstract: The paper considers a conceptual approach for organizing the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In this hierarchy, Dew Computing is positioned as the ground level for the Cloud and Fog Computing paradigms. The vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew Computing paradigm will require new programming models that efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.
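
The vertical escalation through the hierarchy can be caricatured in a few lines of Python; the layer names follow the paper, while the capacity numbers and the escalation rule are invented for illustration:

    class Layer:
        """One level of the cloud-fog-dew hierarchy."""
        def __init__(self, name, capacity, upper=None):
            self.name, self.capacity, self.upper = name, capacity, upper

        def handle(self, load):
            if load <= self.capacity:
                return f"served at {self.name}"
            if self.upper is None:
                raise RuntimeError("no layer can serve this request")
            return self.upper.handle(load)   # escalate toward the cloud

    cloud = Layer("cloud", capacity=1000)
    fog = Layer("fog", capacity=50, upper=cloud)
    dew = Layer("dew", capacity=5, upper=fog)  # ground level: on-premises device

    print(dew.handle(3))    # served at dew
    print(dew.handle(200))  # served at cloud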

BibTex:

    @Article{OJCC_2015v2i1n03_Skala,
        title     = {Scalable Distributed Computing Hierarchy: Cloud, Fog and Dew Computing},
        author    = {Karolj Skala and
                     Davor Davidovic and
                     Enis Afgan and
                     Ivan Sovic and
                     Zorislav Sojat},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2015},
        volume    = {2},
        number    = {1},
        pages     = {16--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194519},
        urn       = {urn:nbn:de:101:1-201705194519},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The paper considers a conceptual approach for organizing the vertical hierarchical links between the scalable distributed computing paradigms: Cloud Computing, Fog Computing and Dew Computing. In this paper, Dew Computing is described and recognized as a new structural layer in the existing distributed computing hierarchy. In this hierarchy, Dew Computing is positioned as the ground level for the Cloud and Fog Computing paradigms. The vertical, complementary, hierarchical division from Cloud to Dew Computing satisfies the needs of high- and low-end computing demands in everyday life and work. These new computing paradigms lower the cost and improve the performance, particularly for concepts and applications such as the Internet of Things (IoT) and the Internet of Everything (IoE). In addition, the Dew Computing paradigm will require new programming models that efficiently reduce the complexity and improve the productivity and usability of scalable distributed computing, following the principles of High-Productivity computing.}
    }
41 citations in 2017:

On-demand energy monitoring and response architecture in a ubiquitous world

Oihane Kamara-Esteban, Ander Pijoan, Ainhoa Alonso-Vicario, Cruz E. Borges

Personal and Ubiquitous Computing, 21(3), Pages 537-551, 2017.

Internet of Things Framework for Home Care Systems

Biljana Risteska Stojkoska, Kire Trivodaliev, Danco Davcev

Wireless Communications and Mobile Computing, 2017.

Distributed Deep Neural Networks Over the Cloud, the Edge and End Devices

Surat Teerapittayanon, Bradley McDanel, H. T. Kung

In 37th IEEE International Conference on Distributed Computing Systems (ICDCS 2017), Atlanta, GA, USA, Pages 328-339, 2017.

The Hierarchical Distributed Agent Based Approach to a Modern Data Center Management

Andrey Gavrilov, Yury Leokhin

In ITM Web of Conferences, 2017.

Augmented Coaching Ecosystem for Non-obtrusive Adaptive Personalized Elderly Care on the Basis of Cloud-Fog-Dew Computing Paradigm

Yuri Gordienko, Sergii Stirenko, Oleg Alienin, Karolj Skala, Z. Soyat, Anis Rojbi, Jorge R. López Benito, E. Artetxe González, U. Lushchyk, L. Sajn, A. Llorente Coto, G. Jervan

CoRR, abs/1704.04988, 2017.

Diet-ESP: IP layer security for IoT

Daniel Migault, Tobias Guggemos, Sylvain Killian, Maryline Laurent, Guy Pujolle, Jean-Philippe Wary

Journal of Computer Security, 25(2), Pages 173-203, 2017.

Advanced mobile and wearable systems

Lech Jóźwiak

Microprocessors and Microsystems, 50, Pages 202-221, 2017.

Do we all really know what a Fog Node is? Current trends towards an open definition

Eva Marín-Tordera, Xavier Masip-Bruin, Jordi Garcia Almiñana, Admela Jukan, Guang-Jie Ren, Jiafeng Zhu

Computer Communications, 109, Pages 117-130, 2017.

Towards a Model-driven Performance Prediction Approach for Internet of Things Architectures

Johannes Kroß, Sebastian Voss, Helmut Krcmar

Open Journal of Internet Of Things (OJIOT), 3(1), Pages 136-141, 2017.

Cloud-Dew computing support for automatic data analysis in life sciences

P. Brezany, T. Ludescher, T. Feilhauer

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 365-370, 2017.

The application of production-related information technology architecture to improve on visual management systems within the manufacturing industry

Lukas Petrus Steenkamp

2017. PhD thesis at Stellenbosch University

Performance Evaluation of Distributed Computing Environments with Hadoop and Spark Frameworks

Vladyslav Taran, Oleg Alienin, Sergii Stirenko, A Rojbi, Yuri Gordienko

arXiv preprint arXiv:1707.04939, 2017.

Distributed Database System as a base for multilanguage support for legacy software

Nenad Crnko

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 371-374, 2017.

Fog Computing and Edge Computing Architectures for Processing Data From Diabetes Devices Connected to the Medical Internet of Things

David C. Klonoff

Journal of Diabetes Science and Technology, 11(4), Pages 647-652, 2017.

Service-oriented application for parallel solving the Parametric Synthesis Feedback problem of controlled dynamic systems

G. A. Oparin, V. G. Bogdanova, S. A. Gorsky, A. A. Pashinin

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 353-358, 2017.

A dew computing solution for IoT streaming devices

Marjan Gusev

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 387-392, 2017.

A Framework for Enabling Security Services Collaboration Across Multiple Domains

Daniel Migault, Marcos A. Simplicio, Bruno M. Barros, Makan Pourzandi, Thiago R.M. Almeida, Ewerton R. Andrade, Tereza C.M.B. Carvalho

In 37th International Conference on Distributed Computing Systems (ICDCS), Pages 999-1010, 2017.

Development of Cable Distribution Cabinets by utilizing digital technology and connected devices

Joakim Larsson, Carl Tööj

2017. Master’s thesis at Chalmers University of Technology

The dawn of Dew: Dew Computing for advanced living environment

Zorislav Sojaat, Karolj Skala

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 347-352, 2017.

Architecting a hybrid cross layer dew-fog-cloud stack for future data-driven cyber-physical systems

Marc Frincu

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 399-403, 2017.

3D-based location positioning using the Dew Computing approach for indoor navigation

D. Podbojec, B. Herynek, D. Jazbec, M. Cvetko, M. Debevc, I. Kožuh

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 393-398, 2017.

人の感性に着目したスマートデバイスによるセンシング方式の研究 (Emotion-focused methodology for smart device sensing)

Hiroshi Jogasaki

2017. Doctoral Thesis at Graduate School of Systems Information Science Future University Hakodate

State-of-the-art of cloud solutions based on ECG sensors.

Marjan Gusev, Ana Guseva

In IEEE EUROCON 2017 - 17th International Conference on Smart Technologies, Ohrid, Macedonia, July 6-8, 2017, Pages 501-506, 2017.

Congestion Aware Packet Routing for Delay Sensitive Cloud Communications

Vincent O. Nyangaresi, Silvance O. Abeka, Solomon O. Ogara

International Journal of Computer Networks and Applications (IJCNA), 4(4), Pages 93-104, 2017.

The Interdependent Part of Cloud Computing: Dew Computing

Hiral M. Patel, Rupal R. Chaudhari, Kinjal R. Prajapati, Ami A. Patel

In Intelligent Communication and Computational Technologies: Proceedings of Internet of Things for Technological Development (IoT4TD), Pages 1-9, 2017.

Security and Compliance Ontology for Cloud Service Agreements

Ana Sofía Zalazar, Luciana Ballejos, Sebastian Rodriguez

Open Journal of Cloud Computing (OJCC), 4(1), Pages 17-25, 2017.

Time Series Distributed Analysis in IoT with ETL and Data Mining Technologies.

Ivan Kholod, Maria Efimova, Andrey Rukavitsyn, Andrey Shorov

In Internet of Things, Smart Spaces, and Next Generation Networks and Systems - 17th International Conference, NEW2AN 2017, 10th Conference, ruSMART 2017, Third Workshop NsCC 2017, St. Petersburg, Russia, Pages 97-108, 2017.

Performance Aspects of Object-based Storage Services on Single Board Computers

Christian Baun, Henry-Norbert Cocos, Rosa-Maria Spanou

Open Journal of Cloud Computing (OJCC), 4(1), Pages 1-16, 2017.

Evolution of the Distributed Computing Paradigms: a Brief Road Map

Haitham Barkallah, Mariem Gzara, Hanene Ben Abdallah

International Journal of Computing and Digital Systems, 6(5), 2017.

Machine Learning on Large Databases: Transforming Hidden Markov Models to SQL Statements

Dennis Marten, Andreas Heuer

Open Journal of Databases (OJDB), 4(1), Pages 22-42, 2017.

Post-cloud computing paradigms: a survey and comparison

Y. Zhou, D. Zhang, N. Xiong

Tsinghua Science and Technology, 22(6), Pages 714-732, 2017.

Fog over Virtualized IoT: New Opportunity for Context-Aware Networked Applications and a Case Study

Paola G. V. Naranjo, Zahra Pooranian, Shahaboddin Shamshirband, Jemal H. Abawajy, Mauro Conti

Applied Sciences, 7(12), 2017.

Dew Computing and Transition of Internet Computing Paradigms

Yingwei Wang, Karolj Skala, Andy Rindos, Marjan Gusev, Shuhui Yang, Yi Pan

ZTE Communications, 15(4), 2017.

Smart Supply-Chain Management Learning System for Homeopathy.

Mulay Preeti, Kadlag Swati, Shirodkar Ruchi

Indian Journal of Public Health Research & Development, 8(4), Pages 914-922, 2017.

Mobile wireless sensor network gateway: A raspberry Pi implementation with a VPN backend to OpenStack

Eduard-Florentin Luchian, Adrian Taut, Iustin-Alexandru Ivanciu, Gabriel Lazar, Virgil Dobrota

In 25th International Conference on Software, Telecommunications and Computer Networks, SoftCOM 2017, Split, Croatia, September 21-23, 2017, Pages 1-5, 2017.

A novel approach for securely processing information on dew sites (Dew computing) in collaboration with cloud computing: An approach toward latest research trends on Dew computing

H. Patel, K. Suthar

In Nirma University International Conference on Engineering (NUiCONE), Pages 1-6, 2017.

Web-сервис синтеза линейной обратной связи для двоичных динамических систем (Web service for the synthesis of linear feedback for binary dynamic systems)

Vera G. Bogdanova, Sergey A. Gorsky, Anton A. Pashinin

Информационные и математические технологии в науке и управлении (Information and Mathematical Technologies in Science and Management), Pages 62-70, 2017.

Cloud-fog-dew architecture for refined driving assistance: The complete service computing ecosystem

Tushar S. Mane, Himanshu Agrawal

In 17th IEEE International Conference on Ubiquitous Wireless Broadband, ICUWB 2017, Salamanca, Spain, September 12-15, 2017, Pages 1-7, 2017.

Cloud Computing: A Review

Richa Singla, Richa Dutta

International Journal of Computer Science and Electronics (IJCSC), 8(1), Pages 55-62, 2017.

Security Enhanced Internet of Vehicles with Cloud-Fog-Dew Computing

Ziqian Meng, Zhi Guan, Zhengang Wu, Anran Li, Zhong Chen

ZTE Communications, 15(S2), Pages 47-51, 2017.

Исследование возможностей построения плоскости управления в центрах обработки данных на базе агентов для различных архитектур и систем (A study of approaches to building an agent-based control plane in data centers for various architectures and systems)

A. V. Gavrilov

In Электронный бизнес. Управление интернет-проектами. Инновации (Electronic Business. Internet Project Management. Innovations), Pages 121-125, 2017.

 Open Access 

A NoSQL-Based Framework for Managing Home Services

Marinette Bouet, Michel Schneider

Open Journal of Information Systems (OJIS), 3(1), Pages 1-28, 2016, Downloads: 11274, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194810 | GNL-LP: 113236115X | Meta-Data: tex xml rdf rss

Abstract: Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.
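
A minimal pymongo sketch of such a geo-localized activity store follows; the database, collection and document layout are illustrative, not the paper's actual schema:

    from pymongo import MongoClient, GEOSPHERE

    client = MongoClient("mongodb://localhost:27017")
    activities = client.homeservices.activities
    activities.create_index([("location", GEOSPHERE)])

    activities.insert_one({
        "service": "garden maintenance",
        "status": "planned",
        "location": {"type": "Point", "coordinates": [10.6866, 53.8655]},
    })

    # Find planned activities within 5 km of a supplier's current position.
    nearby = activities.find({
        "status": "planned",
        "location": {"$near": {
            "$geometry": {"type": "Point", "coordinates": [10.70, 53.87]},
            "$maxDistance": 5000,
        }},
    })
    for doc in nearby:
        print(doc["service"])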

BibTex:

    @Article{OJIS_2016v3i1n02_Marinette,
        title     = {A NoSQL-Based Framework for Managing Home Services},
        author    = {Marinette Bouet and
                     Michel Schneider},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194810},
        urn       = {urn:nbn:de:101:1-201705194810},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Individuals and companies have an increasing need for services by specialized suppliers in their homes or premises. These services can be quite different and can require different amounts of resources. Service suppliers have to specify the activities to be performed, plan those activities, allocate resources, follow up after their completion and must be able to react to any unexpected situation. Various proposals were formulated to model and implement these functions; however, there is no unified approach that can improve the efficiency of software solutions to enable economy of scale. In this paper, we propose a framework that a service supplier can use to manage geo-localized activities. The proposed framework is based on a NoSQL data model and implemented using the MongoDB system. We also discuss the advantages and drawbacks of a NoSQL approach.}
    }
0 citations in 2017

 Open Access 

High-Dimensional Spatio-Temporal Indexing

Mathias Menninghaus, Martin Breunig, Elke Pulvermüller

Open Journal of Databases (OJDB), 3(1), Pages 1-20, 2016, Downloads: 10502

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194635 | GNL-LP: 1132360897 | Meta-Data: tex xml rdf rss

Abstract: There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well-known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions, but are not able to store now-relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.
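
For reference, the classic Pyramid Technique mapping that such an adapter builds on reduces a d-dimensional point in [0,1]^d to a single value that a B+-tree can index; the sketch below shows this standard mapping, not the STPA itself:

    def pyramid_value(point):
        """Map a point in [0,1]^d to its one-dimensional pyramid value."""
        d = len(point)
        # pyramid whose defining dimension deviates most from the center
        j = max(range(d), key=lambda i: abs(point[i] - 0.5))
        i = j if point[j] < 0.5 else j + d   # pyramid number in [0, 2d)
        height = abs(point[j] - 0.5)         # distance from the center point
        return i + height                    # integer part identifies the pyramid

    print(pyramid_value([0.1, 0.6, 0.5]))    # 0.4: pyramid 0, height 0.4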

BibTex:

    @Article{OJDB_2016v3i1n01_Menninghaus,
        title     = {High-Dimensional Spatio-Temporal Indexing},
        author    = {Mathias Menninghaus and
                     Martin Breunig and
                     Elke Pulverm{\"u}ller},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--20},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194635},
        urn       = {urn:nbn:de:101:1-201705194635},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There exist numerous indexing methods which handle either spatio-temporal or high-dimensional data well. However, those indexing methods which handle spatio-temporal data well have certain drawbacks when confronted with high-dimensional data. As the most efficient spatio-temporal indexing methods are based on the R-tree and its variants, they face the well-known problems in high-dimensional space. Furthermore, most high-dimensional indexing methods try to reduce the number of dimensions in the data being indexed and compress the information given by all dimensions into few dimensions, but are not able to store now-relative data. One of the most efficient high-dimensional indexing methods, the Pyramid Technique, is able to handle high-dimensional point-data only. Nonetheless, we take this technique and extend it such that it is able to handle spatio-temporal data as well. We introduce a technique for querying in this structure with spatio-temporal queries. We compare our technique, the Spatio-Temporal Pyramid Adapter (STPA), to the RST-tree for in-memory and on-disk applications. We show that for high dimensions, the extra query cost for reducing the dimensionality in the Pyramid Technique is clearly exceeded by the rising query cost in the RST-tree. Concluding, we address the main drawbacks and advantages of our technique.}
    }
0 citations in 2017

 Open Access 

Criteria of Successful IT Projects from Management's Perspective

Mark Harwardt

Open Journal of Information Systems (OJIS), 3(1), Pages 29-54, 2016, Downloads: 19574, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194797 | GNL-LP: 1132361133 | Meta-Data: tex xml rdf rss

Abstract: The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.

BibTex:

    @Article{OJIS_2016v3i1n02_Harwardt,
        title     = {Criteria of Successful IT Projects from Management's Perspective},
        author    = {Mark Harwardt},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {29--54},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194797},
        urn       = {urn:nbn:de:101:1-201705194797},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The aim of this paper is to compile a model of IT project success from management's perspective. Therefore, a qualitative research approach is proposed by interviewing IT managers on how their companies evaluate the success of IT projects. The evaluation of the survey provides fourteen success criteria and four success dimensions. This paper also thoroughly analyzes which of these criteria the management considers especially important and which ones are being missed in daily practice. Additionally, it attempts to identify the relevance of the discovered criteria and dimensions with regard to the determination of IT project success. It becomes evident here that the old-fashioned Iron Triangle still plays a leading role, but some long-term strategic criteria, such as value of the project, customer perspective or impact on the organization, have meanwhile caught up or pulled even.}
    }
1 citation in 2017:

Servant Leadership in der IT

Mark Harwardt

2017. PhD thesis at WHU - Otto Beisheim School of Management

 Open Access 

Definition and Categorization of Dew Computing

Yingwei Wang

Open Journal of Cloud Computing (OJCC), 3(1), Pages 1-7, 2016, Downloads: 13400, Citations: 69

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194546 | GNL-LP: 1132360781 | Meta-Data: tex xml rdf rss

Abstract: Dew computing is an emerging new research area and has great potential in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.

BibTex:

    @Article{OJCC_2016v3i1n02_YingweiWang,
        title     = {Definition and Categorization of Dew Computing},
        author    = {Yingwei Wang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--7},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194546},
        urn       = {urn:nbn:de:101:1-201705194546},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Dew computing is an emerging new research area and has great potential in applications. In this paper, we propose a revised definition of dew computing. The new definition is: Dew computing is an on-premises computer software-hardware organization paradigm in the cloud computing environment where the on-premises computer provides functionality that is independent of cloud services and is also collaborative with cloud services. The goal of dew computing is to fully realize the potentials of on-premises computers and cloud services. This definition emphasizes two key features of dew computing: independence and collaboration. Furthermore, we propose a group of dew computing categories. These categories may inspire new applications.}
    }
13 citations in 2017:

Cloud-Dew computing support for automatic data analysis in life sciences

Peter Brezany, Thomas Ludescher, Thomas Feilhauer

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 365-370, 2017.

Distributed Database System as a base for multilanguage support for legacy software

Nenad Crnko

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 371-374, 2017.

A dew computing solution for IoT streaming devices

Marjan Gusev

In 40th International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), Pages 387-392, 2017.

The Interdependent Part of Cloud Computing: Dew Computing

Hiral M. Patel, Rupal R. Chaudhari, Kinjal R. Prajapati, Ami A. Patel

In Intelligent Communication and Computational Technologies: Proceedings of Internet of Things for Technological Development (IoT4TD), Pages 1-9, 2017.

Security and Compliance Ontology for Cloud Service Agreements

Ana Sofía Zalazar, Luciana Ballejos, Sebastian Rodriguez

Open Journal of Cloud Computing (OJCC), 4(1), Pages 17-25, 2017.

Performance Aspects of Object-based Storage Services on Single Board Computers

Christian Baun, Henry-Norbert Cocos, Rosa-Maria Spanou

Open Journal of Cloud Computing (OJCC), 4(1), Pages 1-16, 2017.

Internet de las Cosas en las Instituciones de Educación Superior (Internet of Things in Higher Education Institutions)

Johan Smith Rueda-Rueda, Johana Andrea Manrique, José Daniel Cabrera Cruz

In Congreso Internacional en Innovación y Apropiación de las Tecnologías de la Información y las Comunicaciones (CIINATIC), Cúcuta, Colombia, 2017.

Post-cloud computing paradigms: a survey and comparison

Y. Zhou, D. Zhang, N. Xiong

Tsinghua Science and Technology, 22(6), Pages 714-732, 2017.

Dew Computing and Transition of Internet Computing Paradigms

Yingwei Wang, Karolj Skala, Andy Rindos, Marjan Gusev, Shuhui Yang, Yi Pan

ZTE Communications, 15(4), 2017.

Cloud-fog-dew architecture for refined driving assistance: The complete service computing ecosystem

Tushar S. Mane, Himanshu Agrawal

In International Conference on Ubiquitous Wireless Broadband (ICUWB), Salamanca, Spain, Pages 1-7, 2017.

A novel approach for securely processing information on dew sites (Dew computing) in collaboration with cloud computing: An approach toward latest research trends on Dew computing

H. Patel, K. Suthar

In Nirma University International Conference on Engineering (NUiCONE), Pages 1-6, 2017.

Abordarea dew computing ca extensie a arhitecturilor orientate cloud - analiză de oportunitate (Approaching dew computing as an extension of cloud-oriented architectures: an opportunity analysis)

Gabriel Neagu, Marilena Ianculescu

Revista Română de Informatică și Automatică, 27(4), Pages 5-14, 2017.

Rolul influenţei sociale în acceptarea Facebook: testarea unui model TAM extins (The role of social influence in the acceptance of Facebook: testing an extended TAM model)

Irina Cristescu

Revista Română de Informatică şi Automatică, 27(3), Pages 37-46, 2017.

 Open Access 

Runtime Adaptive Hybrid Query Engine based on FPGAs

Stefan Werner, Dennis Heinrich, Sven Groppe, Christopher Blochwitz, Thilo Pionteck

Open Journal of Databases (OJDB), 3(1), Pages 21-41, 2016, Downloads: 13804, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194645 | GNL-LP: 1132360900 | Meta-Data: tex xml rdf rss

Abstract: This paper presents a fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible enough to cover a wide range of queries at system runtime. Therefore, we introduce a runtime-reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently integrates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via the PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as on the widely known Billion Triples Challenge.
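
Conceptually, the runtime decision can be pictured as assigning each operator of a query plan either to an available hardware module or to the software engine. The Python toy below (operator and module names invented) is only a caricature of that hybrid placement, not the authors' code:

    HARDWARE_OPERATORS = {"triple_scan", "merge_join", "filter"}

    def place_operators(operator_graph):
        """Map each operator of a query plan to 'fpga' or 'software'."""
        return {op: ("fpga" if op in HARDWARE_OPERATORS else "software")
                for op in operator_graph}

    plan = ["triple_scan", "merge_join", "optional", "projection"]
    print(place_operators(plan))   # unsupported operators fall back to software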

BibTex:

    @Article{OJDB_2016v3i1n02_Werner,
        title     = {Runtime Adaptive Hybrid Query Engine based on FPGAs},
        author    = {Stefan Werner and
                     Dennis Heinrich and
                     Sven Groppe and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {21--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194645},
        urn       = {urn:nbn:de:101:1-201705194645},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper presents a fully integrated hardware-accelerated query engine for large-scale datasets in the context of Semantic Web databases. As queries are typically unknown at design time, a static approach is not feasible and not flexible enough to cover a wide range of queries at system runtime. Therefore, we introduce a runtime-reconfigurable accelerator based on a Field Programmable Gate Array (FPGA), which transparently integrates with the freely available Semantic Web database LUPOSDATE. At system runtime, the proposed approach dynamically generates an optimized hardware accelerator in terms of an FPGA configuration for each individual query and transparently retrieves the query result to be displayed to the user. During hardware-accelerated execution the host supplies triple data to the FPGA and retrieves the results from the FPGA via the PCIe interface. The benefits and limitations are evaluated on large-scale synthetic datasets with up to 260 million triples as well as on the widely known Billion Triples Challenge.}
    }
2 citations in 2017:

Semi-static operator graphs for accelerated query execution on FPGAs

Stefan Werner, Dennis Heinrich, Thilo Pionteck, Sven Groppe

Microprocessors and Microsystems, 53, Pages 178-189, 2017.

Search & Update Optimization of a B+ Tree in a Hardware Aided Semantic Web Database System

Dennis Heinrich, Stefan Werner, Christopher Blochwitz, Thilo Pionteck, Sven Groppe

In Proceedings of the 7th International Conference on Emerging Databases: Technologies, Applications, and Theory, Pages 172-182, 2017.

 Open Access 

Query Processing in a P2P Network of Taxonomy-based Information Sources

Carlo Meghini, Anastasia Analyti

Open Journal of Web Technologies (OJWT), 3(1), Pages 1-25, 2016, Downloads: 6308

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291402 | GNL-LP: 1133021654 | Meta-Data: tex xml rdf rss

Abstract: In this study we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To lay the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such a network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.
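
The centralized core can be sketched as a breadth-first expansion of the subsumption relation followed by a union of the objects indexed under the reached terms; the toy taxonomy and index below only cover term-to-term subsumption, whereas the paper also allows DNF formulas:

    from collections import deque

    subsumed_by = {   # child -> parents: "camera" is subsumed by "device", ...
        "camera": ["device"], "phone": ["device"], "device": ["artifact"],
    }
    index = {"camera": {"o1"}, "phone": {"o2"}, "device": {"o3"}}

    def answer(term):
        """Objects classified under `term` or under any term it subsumes."""
        children = {}
        for child, parents in subsumed_by.items():
            for parent in parents:
                children.setdefault(parent, []).append(child)
        seen, queue, result = {term}, deque([term]), set()
        while queue:
            t = queue.popleft()
            result |= index.get(t, set())
            for c in children.get(t, []):
                if c not in seen:
                    seen.add(c)
                    queue.append(c)
        return result

    print(answer("device"))   # {'o1', 'o2', 'o3'}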

BibTex:

    @Article{OJWT_2016v3i1n02_Meghini,
        title     = {Query Processing in a P2P Network of Taxonomy-based Information Sources},
        author    = {Carlo Meghini and
                     Anastasia Analyti},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291402},
        urn       = {urn:nbn:de:101:1-201705291402},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {In this study we address the problem of answering queries over a peer-to-peer system of taxonomy-based sources. A taxonomy states subsumption relationships between negation-free DNF formulas on terms and negation-free conjunctions of terms. To lay the foundations of our study, we first consider the centralized case, deriving the complexity of the decision problem and of query evaluation. We conclude by presenting an algorithm that is efficient in data complexity and is based on hypergraphs. We then move to the distributed case, and introduce a logical model of a network of taxonomy-based sources. On such a network, a distributed version of the centralized algorithm is then presented, based on a message passing paradigm, and its correctness is proved. We finally discuss optimization issues, and relate our work to the literature.}
    }
0 citations in 2017

 Open Access 

A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes

Kazuhiro Yamaguchi, Mitumasa Saito, Takuya Akiyama, Tomohiro Kobayashi, Naoki Ginoza, Hideaki Matsue

Open Journal of Internet Of Things (OJIOT), 2(1), Pages 1-15, 2016, Downloads: 13684, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201704245003 | GNL-LP: 1130623858 | Meta-Data: tex xml rdf rss

Abstract: This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and uses a tunable FIR filtering in signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The results of simulation show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.
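
The effect of differential detection can be reproduced numerically: subtracting consecutive sweeps cancels the echoes of static background objects exactly, while a moving target's phase shift survives and its beat frequency (and hence its range) remains visible in the spectrum. All parameters below are illustrative, not those of the 24 GHz system:

    import numpy as np

    fs, n = 1.0e6, 1024                  # sample rate and samples per sweep
    t = np.arange(n) / fs

    def sweep(beat_hz, phase):
        """Received beat signal of one FM-CW sweep for a single reflector."""
        return np.cos(2 * np.pi * beat_hz * t + phase)

    static = sweep(50e3, 0.0)            # background object: identical each sweep
    target_k = sweep(80e3, 0.0)          # moving target, sweep k
    target_k1 = sweep(80e3, 0.4)         # same target, sweep k+1 (displaced)

    diff = (static + target_k1) - (static + target_k)   # static echo cancels
    spectrum = np.abs(np.fft.rfft(diff))
    freqs = np.fft.rfftfreq(n, d=1 / fs)
    print(f"dominant beat frequency: {freqs[spectrum.argmax()]:.0f} Hz")  # ~80 kHz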

BibTex:

    @Article{OJIOT_2016v2i1n02_Yamaguchi,
        title     = {A 24 GHz FM-CW Radar System for Detecting Closed Multiple Targets and Its Applications in Actual Scenes},
        author    = {Kazuhiro Yamaguchi and
                     Mitumasa Saito and
                     Takuya Akiyama and
                     Tomohiro Kobayashi and
                     Naoki Ginoza and
                     Hideaki Matsue},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704245003},
        urn       = {urn:nbn:de:101:1-201704245003},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper develops a 24 GHz band FM-CW radar system to detect closed multiple targets in a small displacement environment, and its performance is analyzed by computer simulation. The FM-CW radar system uses a differential detection method for removing any signals from background objects and uses a tunable FIR filtering in signal processing for detecting multiple targets. The differential detection method enables the correct detection of both the distance and small displacement at the same time for each target at the FM-CW radar according to the received signals. The basic performance of the FM-CW radar system is analyzed by computer simulation, and the distance and small displacement of a single target are measured in field experiments. The computer simulations are carried out for evaluating the proposed detection method with tunable FIR filtering for the FM-CW radar and for analyzing the performance according to the parameters in a closed multiple targets environment. The results of simulation show that our 24 GHz band FM-CW radar with the proposed detection method can effectively detect both the distance and the small displacement for each target in multiple moving targets environments. Moreover, we develop an IoT-based application for monitoring several targets at the same time in actual scenes.}
    }
1 citation in 2017:

Measurement of golf ball’s speed and launch angle using 24 GHz FM-CW doppler radar system

Jun-Young Ko, Kyeong-Rok Kim, Sung-Hyung Lee, Jin-Ki Kim, So-Yi Jung, Jae-Hyun Ki

Pages 123-124, 2017.

 Open Access 

Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets

Rafael Peixoto, Thomas Hassan, Christophe Cruz, Aurélie Bertaux, Nuno Silva

Open Journal of Semantic Web (OJSW), 3(1), Pages 1-15, 2016, Downloads: 7702, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194907 | GNL-LP: 113236129X | Meta-Data: tex xml rdf rss

Abstract: Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items. The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps automatically construct a label hierarchy from a statistical analysis of the data. This paper focuses on the last two steps, which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state-of-the-art approaches in some areas.
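
One plausible ingredient of hierarchy-consistent classification, closing a predicted label set upward over the learned hierarchy, can be sketched in a few lines; this is our illustration, not necessarily the paper's Realization step, and the hierarchy and labels are invented:

    parents = {"jazz": "music", "music": "culture"}   # child -> parent label

    def realize(raw_labels):
        """Close a predicted label set upward over the label hierarchy."""
        labels = set(raw_labels)
        for label in list(labels):
            while label in parents:
                label = parents[label]
                labels.add(label)
        return labels

    print(realize({"jazz"}))   # {'jazz', 'music', 'culture'}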

BibTex:

    @Article{OJSW_2016v3i1n01_Peixoto,
        title     = {Hierarchical Multi-Label Classification Using Web Reasoning for Large Datasets},
        author    = {Rafael Peixoto and
                     Thomas Hassan and
                     Christophe Cruz and
                     Aur{\'e}lie Bertaux and
                     Nuno Silva},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {1--15},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194907},
        urn       = {urn:nbn:de:101:1-201705194907},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Extracting valuable data among large volumes of data is one of the main challenges in Big Data. In this paper, a Hierarchical Multi-Label Classification process called Semantic HMC is presented. This process aims to extract valuable data from very large data sources, by automatically learning a label hierarchy and classifying data items. The Semantic HMC process is composed of five scalable steps, namely Indexation, Vectorization, Hierarchization, Resolution and Realization. The first three steps automatically construct a label hierarchy from a statistical analysis of the data. This paper focuses on the last two steps, which perform item classification according to the label hierarchy. The process is implemented as a scalable and distributed application, and deployed on a Big Data platform. A quality evaluation is described, which compares the approach with multi-label classification algorithms from the state of the art dedicated to the same goal. The Semantic HMC approach outperforms state-of-the-art approaches in some areas.}
    }
2 citations in 2017:

Assessing and Improving Domain Knowledge Representation in DBpedia

Ludovic Font, Amal Zouaq, Michel Gagnon

Open Journal of Semantic Web (OJSW), 4(1), Pages 1-19, 2017.

Classification Hiérarchique Multi-Etiquette de Larges Volumes de Données par Raisonnement Sémantique (Hierarchical Multi-Label Classification of Large Data Volumes by Semantic Reasoning)

Thomas Hassan, Rafael Peixoto, Christophe Cruz, Aurélie Bertaux

In 14ème édition de l'atelier Fouille de Données Complexes (14th Workshop on Complex Data Mining), EGC, Grenoble, 2017.

 Open Access 

A Semantic Question Answering Framework for Large Data Sets

Marta Tatu, Mithun Balakrishna, Steven Werner, Tatiana Erekhinskaya, Dan Moldovan

Open Journal of Semantic Web (OJSW), 3(1), Pages 16-31, 2016, Downloads: 13661, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194921 | GNL-LP: 1132361338 | Meta-Data: tex xml rdf rss

Abstract: Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.
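
The last step described above, executing the translated SPARQL query against a triplestore and returning precise answers, might look as follows; the public DBpedia endpoint and the example query stand in for the paper's own RDF index, and the natural-language-to-SPARQL translation itself is not shown:

    from SPARQLWrapper import SPARQLWrapper, JSON

    # E.g. for the question "What is the capital of Germany?" the framework
    # would issue a query of roughly this shape against its semantic index.
    endpoint = SPARQLWrapper("https://dbpedia.org/sparql")
    endpoint.setQuery("""
        SELECT ?capital WHERE {
            <http://dbpedia.org/resource/Germany>
                <http://dbpedia.org/ontology/capital> ?capital .
        }""")
    endpoint.setReturnFormat(JSON)
    for b in endpoint.query().convert()["results"]["bindings"]:
        print(b["capital"]["value"])   # a precise answer rather than documents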

BibTex:

    @Article{OJSW_2016v3i1n02_Tatu,
        title     = {A Semantic Question Answering Framework for Large Data Sets},
        author    = {Marta Tatu and
                     Mithun Balakrishna and
                     Steven Werner and
                     Tatiana Erekhinskaya and
                     Dan Moldovan},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {16--31},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194921},
        urn       = {urn:nbn:de:101:1-201705194921},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Traditionally, the task of answering natural language questions has involved a keyword-based document retrieval step, followed by in-depth processing of candidate answer documents and paragraphs. This post-processing uses semantics to various degrees. In this article, we describe a purely semantic question answering (QA) framework for large document collections. Our high-precision approach transforms the semantic knowledge extracted from natural language texts into a language-agnostic RDF representation and indexes it into a scalable triplestore. In order to facilitate easy access to the information stored in the RDF semantic index, a user's natural language questions are translated into SPARQL queries that return precise answers back to the user. The robustness of this framework is ensured by the natural language reasoning performed on the RDF store, by the query relaxation procedures, and the answer ranking techniques. The improvements in performance over a regular free text search index-based question answering engine prove that QA systems can benefit greatly from the addition and consumption of deep semantic information.}
    }
3 citations in 2017:

Assessing and Improving Domain Knowledge Representation in DBpedia

Ludovic Font, Amal Zouaq, Michel Gagnon

Open Journal of Semantic Web (OJSW), 4(1), Pages 1-19, 2017.

A web-based system architecture for ontology-based data integration in the domain of IT benchmarking

Matthias Pfaff, Helmut Krcmar

Enterprise Information Systems, Pages 1-23, 2017.

Semantic query graph based SPARQL generation from natural language questions

Shengli Song, Wen Huang, Yulong Sun

Cluster Computing, Pages 1-12, 2017.

 Open Access 

OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources

Marek Smid, Petr Kremen

Open Journal of Semantic Web (OJSW), 3(1), Pages 32-50, 2016, Downloads: 6216, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194936 | GNL-LP: 1132361346 | Meta-Data: tex xml rdf rss

Abstract: Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.
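
As a rough illustration of the broker idea, the sketch below describes each source by prototypical triple patterns and treats a source as a candidate when every pattern of the user query specializes some prototype pattern. All names are invented, and the GeoSPARQL/OWL 2 QL containment reasoning of the paper is reduced here to plain pattern matching.

    # Simplified broker matching: a source is a candidate for a query
    # when each query triple pattern specializes one of the source's
    # prototypical patterns. Variables are written as "?x".
    def subsumes(prototype, pattern):
        # A variable in the prototype matches anything; constants must
        # match exactly.
        return all(p.startswith("?") or p == q
                   for p, q in zip(prototype, pattern))

    def candidate_sources(sources, query_patterns):
        return [name for name, prototypes in sources.items()
                if all(any(subsumes(proto, qp) for proto in prototypes)
                       for qp in query_patterns)]

    sources = {
        "rivers_db": [("?f", "rdf:type", "ex:River"),
                      ("?f", "geo:hasGeometry", "?g")],
        "roads_db":  [("?f", "rdf:type", "ex:Road")],
    }
    print(candidate_sources(sources, [("?r", "rdf:type", "ex:River")]))
    # -> ['rivers_db']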

BibTex:

    @Article{OJSW_2016v3i1n03_Smid,
        title     = {OnGIS: Semantic Query Broker for Heterogeneous Geospatial Data Sources},
        author    = {Marek Smid and
                     Petr Kremen},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {32--50},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194936},
        urn       = {urn:nbn:de:101:1-201705194936},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Querying geospatial data from multiple heterogeneous sources backed by different management technologies poses an interesting problem in the data integration and in the subsequent result interpretation. This paper proposes broker techniques for answering a user's complex spatial query: finding relevant data sources (from a catalogue of data sources) capable of answering the query, eventually splitting the query and finding relevant data sources for the query parts, when no single source suffices. For the purpose, we describe each source with a set of prototypical queries that are algorithmically arranged into a lattice, which makes searching efficient. The proposed algorithms leverage GeoSPARQL query containment enhanced with OWL 2 QL semantics. A prototype is implemented in a system called OnGIS.}
    }
1 citation in 2017:

An Automatic Matcher and Linker for Transportation Datasets

Ali Masri, Karine Zeitouni, Zoubida Kedad, Bertrand Leroy

ISPRS International Journal of Geo-Information, 6(1), Pages 1-20, 2017.

 Open Access 

Conformance of Social Media as Barometer of Public Engagement

Songchun Moon

Open Journal of Big Data (OJBD), 2(1), Pages 1-10, 2016, Downloads: 6346

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194393 | GNL-LP: 1132360560 | Meta-Data: tex xml rdf rss

Abstract: There has long been an expectation that social media can serve as an indicator of the degree of user engagement with, and preference for, music or movies. However, the software tools available on the market to verify this expectation are too costly and complicated, which makes technical experimentation difficult. In this study, a convenient and easy-to-use tool to facilitate such experimentation was developed and used successfully to perform various measurements of user engagement with music and movies.

BibTex:

    @Article{OJBD_2016v2i1n01_Moon,
        title     = {Conformance of Social Media as Barometer of Public Engagement},
        author    = {Songchun Moon},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {1--10},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194393},
        urn       = {urn:nbn:de:101:1-201705194393},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {There has long been an expectation that social media can serve as an indicator of the degree of user engagement with, and preference for, music or movies. However, the software tools available on the market to verify this expectation are too costly and complicated, which makes technical experimentation difficult. In this study, a convenient and easy-to-use tool to facilitate such experimentation was developed and used successfully to perform various measurements of user engagement with music and movies.}
    }
0 citations in 2017

 Open Access 

XML-based Execution Plan Format (XEP)

Christoph Koch

Open Journal of Databases (OJDB), 3(1), Pages 42-52, 2016, Downloads: 6204

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194654 | GNL-LP: 1132360919 | Meta-Data: tex xml rdf rss

Abstract: Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently, each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to switch between different plan formats. In addition, analysis tools for execution plans either support only a single DBMS or have to implement separate logic to handle each DBMS-specific plan format. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL Server.
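
Since the XEP schema itself is not reproduced here, the following Python sketch only illustrates the transformation idea: a DBMS-specific plan (parsed, say, from EXPLAIN output into a nested dict) is serialized into a generic XML plan tree. The element and attribute names are invented, not the actual XEP format.

    # Invented illustration of the transformation idea: serialize a
    # DBMS-specific plan tree (a nested dict) into a generic XML plan.
    # The element/attribute names are NOT the actual XEP schema.
    import xml.etree.ElementTree as ET

    def to_xep(node, parent=None):
        elem = ET.Element("operator",
                          name=node["op"], cost=str(node.get("cost", 0)))
        if parent is not None:
            parent.append(elem)
        for child in node.get("children", []):
            to_xep(child, elem)
        return elem

    plan = {"op": "HashJoin", "cost": 42.0,
            "children": [{"op": "SeqScan", "cost": 10.0},
                         {"op": "IndexScan", "cost": 5.0}]}
    print(ET.tostring(to_xep(plan), encoding="unicode"))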

BibTex:

    @Article{OJDB_2016v3i1n03_Koch,
        title     = {XML-based Execution Plan Format (XEP)},
        author    = {Christoph Koch},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {42--52},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194654},
        urn       = {urn:nbn:de:101:1-201705194654},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Execution plan analysis is one of the most common SQL tuning tasks performed by relational database administrators and developers. Currently, each database management system (DBMS) provides its own execution plan format, which supports system-specific details for execution plans and contains inherent plan operators. This makes SQL tuning a challenging issue. Firstly, administrators and developers often work with more than one DBMS and thus have to switch between different plan formats. In addition, analysis tools for execution plans either support only a single DBMS or have to implement separate logic to handle each DBMS-specific plan format. To address these problems, this paper proposes an XML-based Execution Plan format (XEP), aiming to standardize the representation of execution plans of relational DBMSs. Two approaches are developed for transforming DBMS-specific execution plans into XEP format. They have been successfully evaluated for IBM DB2, Oracle Database and Microsoft SQL Server.}
    }
0 citations in 2017

 Open Access 

Doing More with the Dew: A New Approach to Cloud-Dew Architecture

David Edward Fisher, Shuhui Yang

Open Journal of Cloud Computing (OJCC), 3(1), Pages 8-19, 2016, Downloads: 10396, Citations: 9

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194535 | GNL-LP: 1132360773 | Meta-Data: tex xml rdf rss

Abstract: While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.
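
The proposed design, one dew server per installed domain instead of a single dew server per PC, can be pictured as a local resolver that falls back to the domain's own dew server when offline. The sketch below is a toy illustration with invented domains and ports, not the dewsite implementation from the paper.

    # Toy illustration of "one dew server per installed domain": a local
    # resolver maps each installed domain to its own locally hosted dew
    # server and falls back to it when the machine is offline.
    # Domains and ports are invented.
    DEW_SERVERS = {
        "news.example.com":  "http://127.0.0.1:8001",
        "music.example.com": "http://127.0.0.1:8002",
    }

    def resolve(domain, online):
        if online:
            return f"https://{domain}"        # normal cloud access
        return DEW_SERVERS.get(domain)        # local dew server, if installed

    print(resolve("music.example.com", online=False))  # http://127.0.0.1:8002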

BibTex:

    @Article{OJCC_2016v3i1n02_Fisher,
        title     = {Doing More with the Dew: A New Approach to Cloud-Dew Architecture},
        author    = {David Edward Fisher and
                     Shuhui Yang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2016},
        volume    = {3},
        number    = {1},
        pages     = {8--19},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194535},
        urn       = {urn:nbn:de:101:1-201705194535},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {While the popularity of cloud computing is exploding, a new network computing paradigm is just beginning. In this paper, we examine this exciting area of research known as dew computing and propose a new design of cloud-dew architecture. Instead of hosting only one dew server on a user's PC - as adopted in the current dewsite application - our design promotes the hosting of multiple dew servers instead, one for each installed domain. Our design intends to improve upon existing cloud-dew architecture by providing significantly increased freedom in dewsite development, while also automating the chore of managing dewsite content based on the user's interests and browsing habits. Other noteworthy benefits, all at no added cost to dewsite users, are briefly explored as well.}
    }
3 citations in 2017:

The Interdependent Part of Cloud Computing: Dew Computing

Hiral M. Patel, Rupal R. Chaudhari, Kinjal R. Prajapati, Ami A. Patel

In Intelligent Communication and Computational Technologies: Proceedings of Internet of Things for Technological Development (IoT4TD), Pages 1-9, 2017.

Dew Computing and Transition of Internet Computing Paradigms

Yingwei Wang, Karolj Skala, Andy Rindos, Marjan Gusev, Shuhui Yang, Yi Pan

ZTE Communications, 15(4), 2017.

A novel approach for securely processing information on dew sites (Dew computing) in collaboration with cloud computing: An approach toward latest research trends on Dew computing

H. Patel, K. Suthar

In Nirma University International Conference on Engineering (NUiCONE), Pages 1-6, 2017.

 Open Access 

Controlled Components for Internet of Things As-A-Service

Tatiana Aubonnet, Amina Boubendir, Frédéric Lemoine, Noëmie Simoni

Open Journal of Internet Of Things (OJIOT), 2(1), Pages 16-33, 2016, Downloads: 6495, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201704244995 | GNL-LP: 1130623629 | Meta-Data: tex xml rdf rss

Abstract: To help developers create future Internet of Things (IoT) services that incorporate nonfunctional aspects, we introduce an approach and an environment based on controlled components. Our approach allows developers to design an IoT "as-a-service", to build the service composition and to manage it. This is important, because the IoT allows us to observe and understand the real world in order to have decision-making information to act on reality. It is also important to make sure that all these components work according to their mission, i.e. their Quality of Service (QoS) contract. Our environment provides the modeling, generates Architecture Description Language (ADL) formats, and uses them in the implementation phase on an open-source platform.
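
As a hedged illustration of a controlled component that can check its own Quality of Service contract, consider the Python sketch below; the contract fields and threshold logic are invented for this example, and the paper's component model and ADL generation are not shown.

    # Invented illustration of a controlled component checking its own
    # QoS contract; the contract fields are examples, not the paper's model.
    from dataclasses import dataclass

    @dataclass
    class QoSContract:
        max_latency_ms: float
        min_availability: float   # fraction in [0, 1]

    @dataclass
    class ControlledComponent:
        name: str
        contract: QoSContract

        def conforms(self, latency_ms, availability):
            # True iff the component currently honors its QoS contract.
            return (latency_ms <= self.contract.max_latency_ms
                    and availability >= self.contract.min_availability)

    sensor = ControlledComponent("temperature-sensor", QoSContract(50.0, 0.99))
    print(sensor.conforms(30.0, 0.995))  # True: contract respected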

BibTex:

    @Article{OJIOT_2016v2i1n02_Aubonnet,
        title     = {Controlled Components for Internet of Things As-A-Service},
        author    = {Tatiana Aubonnet and
                     Amina Boubendir and
                     Fr\'{e}d\'{e}ric Lemoine and
                     No\"{e}mie Simoni},
        journal   = {Open Journal of Internet Of Things (OJIOT)},
        issn      = {2364-7108},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {16--33},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201704244995},
        urn       = {urn:nbn:de:101:1-201704244995},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {To help developers create future Internet of Things (IoT) services that incorporate nonfunctional aspects, we introduce an approach and an environment based on controlled components. Our approach allows developers to design an IoT "as-a-service", to build the service composition and to manage it. This is important, because the IoT allows us to observe and understand the real world in order to have decision-making information to act on reality. It is also important to make sure that all these components work according to their mission, i.e. their Quality of Service (QoS) contract. Our environment provides the modeling, generates Architecture Description Language (ADL) formats, and uses them in the implementation phase on an open-source platform.}
    }
1 citation in 2017:

Composants autocontrôlés pour les services d'interaction homme-machine

Frédéric Lemoine, Tatiana Aubonnet, Noëmie Simoni

In 29ème conférence francophone sur l'Interaction Homme-Machine (IHM), Pages 233-242, 2017.

 Open Access 

Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders

Sven Groppe, Dennis Heinrich, Christopher Blochwitz, Thilo Pionteck

Open Journal of Big Data (OJBD), 2(1), Pages 11-25, 2016, Downloads: 6397, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194418 | GNL-LP: 1132360587 | Meta-Data: tex xml rdf rss

Abstract: The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: the basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand, if all 6 collation orders are indexed, fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand, constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence, the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real-world data.
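
The six collation orders follow directly from the RDF data model: a triple has three components (subject, predicate, object), so there are 3! = 6 possible orderings (SPO, SOP, PSO, POS, OSP, OPS). The Python sketch below builds all six sorted index runs for a tiny triple set; real systems typically sort dictionary-encoded integer IDs rather than strings.

    # All 3! = 6 collation orders of an RDF triple (S, P, O): build the
    # six sorted index runs for a tiny triple set.
    from itertools import permutations

    triples = [
        ("ex:alice", "foaf:knows", "ex:bob"),
        ("ex:bob",   "foaf:name",  '"Bob"'),
        ("ex:alice", "foaf:name",  '"Alice"'),
    ]

    POS = {"S": 0, "P": 1, "O": 2}
    indices = {}
    for order in permutations("SPO"):
        key = "".join(order)                      # "SPO", "SOP", ...
        indices[key] = sorted(
            triples, key=lambda t: tuple(t[POS[c]] for c in order))

    print(sorted(indices))
    # -> ['OPS', 'OSP', 'POS', 'PSO', 'SOP', 'SPO']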

BibTex:

    @Article{OJBD_2016v2i1n02_Groppe,
        title     = {Constructing Large-Scale Semantic Web Indices for the Six RDF Collation Orders},
        author    = {Sven Groppe and
                     Dennis Heinrich and
                     Christopher Blochwitz and
                     Thilo Pionteck},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {11--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194418},
        urn       = {urn:nbn:de:101:1-201705194418},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The Semantic Web community collects masses of valuable and publicly available RDF data in order to drive the success story of the Semantic Web. Efficient processing of these datasets requires their indexing. Semantic Web indices make use of the simple data model of RDF: the basic concept of RDF is the triple, which hence has only 6 different collation orders. On the one hand, if all 6 collation orders are indexed, fast merge joins (consuming the sorted input of the indices) can be applied as much as possible during query processing. On the other hand, constructing the indices for 6 different collation orders is very time-consuming for large-scale datasets. Hence, the focus of this paper is the efficient Semantic Web index construction for large-scale datasets on today's multi-core computers. We complete our discussion with a comprehensive performance evaluation, where our approach efficiently constructs the indices of over 1 billion triples of real-world data.}
    }
1 citation in 2017:

A Semantic Safety Check System for Emergency Management

Yogesh Pandey, Srividya K. Bansal

Open Journal of Semantic Web (OJSW), 4(1), Pages 35-50, 2017.

 Open Access 

New Areas of Contributions and New Addition of Security

Victor Chang

Open Journal of Big Data (OJBD), 2(1), Pages 26-28, 2016, Downloads: 4358

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194405 | GNL-LP: 1132360579 | Meta-Data: tex xml rdf rss

Abstract: Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will hold the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017) in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.

BibTex:

    @Article{OJBD_2016v2i1n03e_Chang,
        title     = {New Areas of Contributions and New Addition of Security},
        author    = {Victor Chang},
        journal   = {Open Journal of Big Data (OJBD)},
        issn      = {2365-029X},
        year      = {2016},
        volume    = {2},
        number    = {1},
        pages     = {26--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194405},
        urn       = {urn:nbn:de:101:1-201705194405},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Open Journal of Big Data (OJBD) (www.ronpub.com/ojbd) is an open access journal, which addresses the aspects of Big Data, including new methodologies, processes, case studies, proofs-of-concept, scientific demonstrations, industrial applications and adoption. This editorial presents two articles published in the first issue of the second volume of OJBD. The first article is about the investigation of social media for public engagement. The second article looks into large-scale semantic web indices for six RDF collation orders. OJBD has an increasingly improved reputation thanks to the support of research communities. We will hold the Second International Conference on Internet of Things, Big Data and Security (IoTBDS 2017) in Porto, Portugal, between 24 and 26 April 2017. OJBD is published by RonPub (www.ronpub.com), which is an academic publisher of online, open access, peer-reviewed journals.}
    }
0 citations in 2017