RonPub


RonPub -- Research Online Publishing

RonPub (Research Online Publishing) is an academic publisher of online, open access, peer-reviewed journals. RonPub aims to provide a platform for researchers, developers, educators, and technical managers to share and exchange their research results worldwide.

RonPub Is Open Access:

RonPub publishes all of its journals under the open access model defined by the Budapest, Berlin, and Bethesda open access declarations:

  • All articles published by RonPub are fully open access and available online to readers free of charge.
  • All open access articles are distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction in any medium, provided that the original work is properly cited.
  • Authors retain all copyright to their work.
  • Authors may also publish the publisher's version of their paper on any repository or website.

RonPub Is Cost-Effective:

To be able to provide open access journals, RonPub defrays its publishing costs by charging a one-time publication fee for each accepted article. One of RonPub's objectives is to provide a fast, high-quality, yet low-cost publishing service. To ensure that the fee is never a barrier to publication, RonPub offers a fee waiver for authors who do not have funds to cover publication fees. We also offer a partial fee waiver for editors and reviewers of RonPub as a reward for their work. See the respective journal webpage for the concrete publication fee.

RonPub Publication Criteria

We are most concerned about the quality, not the quantity, of publications; we only publish high-quality scholarly papers. The Publication Criteria section describes the requirements a contribution must meet to be acceptable for publication in RonPub journals.

RonPub Publication Ethics Statement:

To ensure publishing quality and the reputation of the publisher, it is important that all parties involved in the act of publishing adhere to the standards of ethical publishing behaviour. To verify the originality of submissions, we use plagiarism detection tools, such as Anti-Plagiarism, PaperRater and Viper, to check the content of manuscripts submitted to our journals against existing publications.

RonPub follows the Code of Conduct of the Committee on Publication Ethics (COPE), and deals with cases of misconduct according to the COPE Flowcharts.

Long-Term Preservation in the German National Library

Our publications are archived and permanently preserved in the German National Library. The archived publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems.

Where is RonPub?

RonPub is a registered corporation in Lübeck, Germany. Lübeck is a beautiful coastal city in northern Germany, about 60 kilometers from Hamburg, offering wonderful seaside resorts and sandy beaches as well as good restaurants.

For Authors

Manuscript Preparation

Authors should first read the author guidelines of the corresponding journal. Manuscripts must be prepared using the manuscript template of the respective journal, which is available in Word and LaTeX versions for download on the Author Guidelines page of the corresponding journal. The template describes the format and structure of manuscripts and provides other information necessary for preparing them. Manuscripts should be written in English. There is no restriction on the length of manuscripts.

Submission

Authors submit their manuscripts via the submit page of the corresponding journal, initially in PDF format. Once a manuscript is accepted, the author then submits the revised manuscript as a PDF file together with a Word file or a LaTeX folder (with all the material necessary to generate the PDF file). The work described in the submitted manuscript must be previously unpublished and must not be under consideration for publication anywhere else.

Authors are welcome to suggest qualified reviewers for their papers, but this is not mandatory. Authors who wish to do so should provide the name, affiliation and e-mail address of each suggested reviewer.

Manuscript Status

After submission, authors will receive an email within a few days confirming receipt of their manuscript. Subsequent enquiries concerning the progress of a paper should be directed to the corresponding editorial office (see the individual journal webpage for concrete contact information).

Review Procedure

RonPub is committed to enforcing a rigorous peer-review process. All manuscripts submitted for publication in RonPub journals are strictly and thoroughly peer-reviewed. When a manuscript is submitted to a RonPub journal, the editor-in-chief of the journal assigns it to an appropriate editor, who will be in charge of the review process of the manuscript. The editor first suggests potential reviewers and then organizes the peer review herself/himself or entrusts it to the editorial office. For each manuscript, typically three review reports are collected. The editor and the editor-in-chief evaluate the manuscript itself and the review reports and make an accept/revision/reject decision. Authors will be informed of the decision and reviewing results within 6-8 weeks on average after manuscript submission. In the case of a revision, authors are required to revise their manuscript adequately to address the concerns raised in the evaluation reports. A new round of peer review will be performed if necessary.

Accepted manuscripts are published online immediately.

Copyrights

Authors publishing with RonPub open journals retain the copyright to their work. 

All articles published by RonPub are fully open access and available online to readers free of charge. RonPub publishes all open access articles under the Creative Commons Attribution License, which permits unrestricted use, distribution and reproduction, provided that the original work is properly cited.

Digital Archiving Policy

Our publications are archived and permanently preserved in the German National Library. The archived publications are not only preserved for the long term but also remain accessible in the future, because the German National Library ensures that digital data saved in old formats can be viewed and used on current computer systems in the same way as on the original, long-obsolete systems. Further measures will be taken if necessary. Furthermore, we also encourage our authors to self-archive their articles published with RonPub.

For Editors

About RonPub

RonPub is an academic publisher of online, open access, peer-reviewed journals. All articles published by RonPub are fully open access and available online to readers free of charge.

RonPub is located in Lübeck, Germany. Lübeck is a beautiful harbour city, about 60 kilometers from Hamburg.

Editor-in-Chief Responsibilities

The Editor-in-Chief of each journal is mainly responsible for the scientific quality of the journal and for assisting in its management. The Editor-in-Chief suggests topics for the journal, invites distinguished scientists to join the editorial board, oversees the editorial process, and makes the final decision on whether a paper can be published after peer review and revisions.

As a reward for this work, the Editor-in-Chief will receive a 25% discount on the standard publication fee for her/his papers (where the Editor-in-Chief is one of the authors) published in any of RonPub's journals.

Editors’ Responsibilities

Editors assist the Editor-in-Chief in ensuring the scientific quality of the journal and in deciding on its topics. Editors are also encouraged to help promote the journal among their peers and at conferences. An editor invites at least three reviewers to review a manuscript, but may also review the manuscript him-/herself. After carefully evaluating the review reports and the manuscript itself, the editor makes a recommendation on the status of the manuscript. The editor's evaluation as well as the review reports are then sent to the Editor-in-Chief, who makes the final decision on whether a paper can be published after peer review and revisions.

Communication with Editorial Board members is done primarily by e-mail, and editors are expected to respond within a few working days to any question sent by the editorial office so that manuscripts can be processed in a timely fashion. If an editor does not respond or cannot process the work in time, or in other special situations, the editorial office may forward the request to the publisher or Editor-in-Chief, who will make the decision directly.

As a reward for their work, an editor will receive a 25% discount on the standard publication fee for her/his papers (where the editor is one of the authors) published in any of RonPub's journals.

Guest Editors’ Responsibilities

Guest Editors are responsible for the scientific quality of their special issues. Guest Editors are in charge of inviting papers, supervising the refereeing process (each paper should be reviewed by at least three reviewers), and making decisions on the acceptance of manuscripts submitted to their special issue. As with regular issues, all papers accepted by (guest) editors will be sent to the Editor-in-Chief of the journal, who will check the quality of the papers and make the final decision on whether a paper can be published.

Our editorial office reserves the right to ask authors directly to revise their paper if there are quality issues, e.g. weak writing or missing information. Authors may be required to revise their paper several times if necessary. A paper accepted by its guest editor may still be rejected by the Editor-in-Chief of the journal due to low quality. However, this occurs only when authors do not make a genuine effort to revise their paper. A high-quality publication requires the combined efforts of the journal, reviewers, editors, Editor-in-Chief and authors.

The Guest Editors are also expected to write an editorial for the special issue. As a reward for their work, all guest editors and reviewers working on a special issue will receive a 25% discount on the standard publication fee for any of their papers published in any of RonPub's journals within one year.

Reviewers’ Responsibilities

A reviewer is mainly responsible for reviewing manuscripts, writing review reports and recommending acceptance or rejection of manuscripts. Reviewers are also encouraged to provide input on the quality and management of the journal, and to help promote the journal among their peers and at conferences.

Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

As a reward for their reviewing work, a reviewer will receive a 25% discount on the standard publication fee for her/his papers (where the reviewer is one of the authors) published in any of RonPub's journals.

Launching New Journals

RonPub always welcomes suggestions for new open access journals in any research area. We are also open to publishing collaborations with research societies. Please send your proposals for new journals or for publishing collaborations to our contact e-mail address (available on the Contact RonPub page).

Publication Criteria

This section provides important information for both scientific committees and authors.

Ethics Requirements:

For scientific committees: Each editor and reviewer should conduct the evaluation of manuscripts objectively and fairly.
For authors: Authors should present their work honestly without fabrication, falsification, plagiarism or inappropriate data manipulation.

Pre-Check:

In order to filter out fabricated submissions, the editorial office will check the authenticity of the authors and their affiliations before peer review begins. It is important that authors communicate with us using the e-mail addresses of their affiliations and provide us with the URLs of their affiliations. To verify the originality of submissions, we use various plagiarism detection tools to check the content of manuscripts submitted to our journals against existing publications. The overall quality of the paper will also be checked, including format, figures, tables, integrity and adequacy. Authors may be required to improve the quality of their paper before it is sent out for review. If a paper is obviously of low quality, it will be rejected directly.

Acceptance Criteria:

The criterion for acceptance of manuscripts is the quality of the work. This is concretely reflected in the following aspects:

  • Novelty and Practical Impact
  • Technical Soundness
  • Appropriateness and Adequacy of 
    • Literature Review
    • Background Discussion
    • Analysis of Issues
  • Presentation, including 
    • Overall Organization 
    • English 
    • Readability

For a contribution to be acceptable for publication, all of these points should be at least at a satisfactory level.

Guidelines for Rejection:

  • If the work described in the manuscript has been published before, or is under consideration for publication anywhere else, it will not be evaluated.
  • If the work is plagiarized, or contains falsified or fabricated data, it will be rejected.
  • Manuscripts with serious technical flaws will not be accepted.

Call for Journals

Research Online Publishing (RonPub, www.ronpub.com) is a publisher of online, open access and peer-reviewed scientific journals. For more information about RonPub, please visit this link.

RonPub always welcomes suggestions for new journals in any research area. Please send your proposals for journals, along with your curriculum vitae, to our contact e-mail address (available on the Contact RonPub page).

We are also open to publishing collaborations with research societies. Please send your collaboration proposals likewise to our contact e-mail address (available on the Contact RonPub page).

Be an Editor / Be a Reviewer

RonPub always welcomes qualified academics and practitioners to join as editors and reviewers. Being an editor or reviewer is a matter of prestige and personal achievement. Depending on the quality of their reviewing work, a reviewer may be promoted to a full editorial board member.

If you would like to participate as a scientific committee member of any RonPub journal, please send an e-mail with your curriculum vitae to our contact e-mail address (available on the Contact RonPub page). We will get back to you as soon as possible. For more information about editors and reviewers, please visit this link.

Contact RonPub

Location

RonPub UG (haftungsbeschränkt)
Hiddenseering 30
23560 Lübeck
Germany

Comments and Questions

For general inquiries, please contact us by e-mail.

For specific questions on a certain journal, please visit the corresponding journal page to see the email address.

RonPub's Transparent Impact Factor of the Year 2015: 2.62

There are numerous criticisms of the use of impact factors and debates about the validity of the impact factor as a measure of journal importance [1, 2, 3, 5, 6, 8, 9]. Several national-level institutions, such as the German Research Foundation [4] and the Science and Technology Select Committee of the United Kingdom [7], urge their funding councils to evaluate only the quality of individual articles, not the reputation of the journal in which they are published. Nevertheless, we are sometimes asked about the impact factors of our journals. Therefore, we provide the impact factors here for readers who are still interested in them. Our impact factors are calculated in the same way as those of Thomson Reuters, but they are not computed by the company Thomson Reuters; we compute them ourselves, and they can be validated by anyone, because we present all data used in the computation (to anyone, requiring neither registration nor fees). These data are provided here, and each reader can re-compute and check the calculation of these impact factors. Therefore, we call our impact factor the Transparent Impact Factor.

For the calculation of the impact factor of a year Y, we need the number A of articles published in the years Y-1 and Y-2 (excluding editorials). Furthermore, we determine the number B of citations in the year Y that cite articles of RonPub published in the years Y-1 or Y-2. The (2-year) Transparent Impact Factor is then B/A.

There are A := 21 articles published in the years 2013 and 2014. These articles received B := 55 citations in scientific contributions published in 2015. These citations are listed below.

Therefore, the (2-year) Transparent Impact Factor for the year 2015 is B/A = 55/21 ≈ 2.62.
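
The calculation above can be sketched in a few lines of Python. The function name and structure are illustrative assumptions, not RonPub code; the numbers A and B are the ones given in the text.

```python
# Minimal sketch of the 2-year Transparent Impact Factor described above.
# transparent_impact_factor is a hypothetical helper for illustration.

def transparent_impact_factor(articles: int, citations: int) -> float:
    """Impact factor for year Y: citations received in Y to articles
    published in Y-1 and Y-2, divided by the number of those articles."""
    if articles == 0:
        raise ValueError("no articles published in the two preceding years")
    return citations / articles

A = 21  # articles published in 2013 and 2014 (excluding editorials)
B = 55  # citations in 2015 to those articles
print(round(transparent_impact_factor(A, B), 2))  # prints 2.62
```

The same function applies to any year Y once the two counts A and B have been determined from the citation data listed below.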

References

  1. Björn Brembs, Katherine Button and Marcus Munafò. Deep impact: Unintended consequences of journal rank. Frontiers in Human Neuroscience, 7 (291): 1–12, 2013.
  2. Ewen Callaway. Beat it, impact factor! Publishing elite turns against controversial metric. Nature, 535 (7611): 210–211, 2016.
  3. Masood Fooladi, Hadi Salehi, Melor Md Yunus, Maryam Farhadi, Arezoo Aghaei Chadegani, Hadi Farhadi, Nader Ale Ebrahim. Does Criticisms Overcome the Praises of Journal Impact Factor? Asian Social Science, 9 (5), 2013.
  4. German Research Foundation, "Quality not Quantity" – DFG Adopts Rules to Counter the Flood of Publications in Research, Press Release No. 7, 2010.
  5. Khaled Moustafa. The disaster of the impact factor. Science and Engineering Ethics, 21 (1): 139–142, 2015.
  6. Mike Rossner, Heather Van Epps, Emma Hill. Show me the data. Journal of Cell Biology, 179 (6): 1091–2, 2007.
  7. Science and Technology Committee, Scientific Publications: Free for all? Tenth Report of the Science and Technology Committee of the House of Commons, 2004.
  8. Maarten van Wesel. Evaluation by Citation: Trends in Publication Behavior, Evaluation Criteria, and the Strive for High Impact Publications. Science and Engineering Ethics, 22 (1): 199–225, 2016.
  9. Time to remodel the journal impact factor. Nature, 535 (466), 2016.

Citations

This list of citations may not be complete. Please contact us, if citations are missing. There might be errors in the citation data due to automatic processing.


An Introductory Approach to Risk Visualization as a Service

Victor Chang

Open Journal of Cloud Computing (OJCC), 1(1), Pages 1-9, 2014, Downloads: 10019, Citations: 13

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194429 | GNL-LP: 1132360595 | Meta-Data: tex xml rdf rss

Abstract: This paper introduces the Risk Visualization as a Service (RVaaS) and presents the motivation, rationale, methodology, Cloud APIs used, operations and examples of using RVaaS. Risks can be calculated within seconds and presented in the form of Visualization to ensure that unexploited areas are ex-posed. RVaaS operates in two phases. The first phase includes the risk modeling in Black Scholes Model (BSM), creating 3D Visualization and Analysis. The second phase consists of calculating key derivatives such as Delta and Theta for financial modeling. Risks presented in visualization allow the potential investors and stakeholders to keep track of the status of risk with regard to time, prices and volatility. Our approach can improve accuracy and performance. Results in experiments show that RVaaS can perform up to 500,000 simulations and complete all simulations within 24 seconds for time steps of up to 50. We also introduce financial stock market analysis (FSMA) that can fully blend with RVaaS and demonstrate two examples that can help investors make better decision based on the pricing and market volatility information. RVaaS provides a structured way to deploy low cost, high quality risk assessment and support real-time calculations.

BibTex:

    @Article{OJCC-v1i1n01_Chang,
        title     = {An Introductory Approach to Risk Visualization as a Service},
        author    = {Victor Chang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {1--9},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194429},
        urn       = {urn:nbn:de:101:1-201705194429},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper introduces the Risk Visualization as a Service (RVaaS) and presents the motivation, rationale, methodology, Cloud APIs used, operations and examples of using RVaaS. Risks can be calculated within seconds and presented in the form of Visualization to ensure that unexploited areas are ex-posed. RVaaS operates in two phases. The first phase includes the risk modeling in Black Scholes Model (BSM), creating 3D Visualization and Analysis. The second phase consists of calculating key derivatives such as Delta and Theta for financial modeling. Risks presented in visualization allow the potential investors and stakeholders to keep track of the status of risk with regard to time, prices and volatility. Our approach can improve accuracy and performance. Results in experiments show that RVaaS can perform up to 500,000 simulations and complete all simulations within 24 seconds for time steps of up to 50. We also introduce financial stock market analysis (FSMA) that can fully blend with RVaaS and demonstrate two examples that can help investors make better decision based on the pricing and market volatility information. RVaaS provides a structured way to deploy low cost, high quality risk assessment and support real-time calculations.}
    }

8 citations in 2015:

Cloud Computing and Frameworks for Organisational Cloud Adoption

Victor Chang, Robert John Walters, Gary B Wills

In Delivery and Adoption of Cloud Computing Services in Contemporary Organizations, 2015.

Benefits and Challenges for BPM in the Cloud.

Ute Riemann

International Journal of Organizational and Collective Intelligence (IJOCI), 5(1), Pages 32-61, 2015.

Cloud Computing: A Practical Overview Between Year 2009 and Year 2015.

Yulin Yao

International Journal of Organizational and Collective Intelligence (IJOCI), 5(3), Pages 32-43, 2015.

Analysis on cloud services on business processes in the digitalization of consumer products industry

Ute Riemann

In Delivery and adoption of cloud computing services in contemporary organizations, Pages 129-165, 2015.

Analyzing French and Italian iPhone 4S Mobile Cloud Customer Satisfaction Presented by Organizational Sustainability Modeling

Victor Chang

In Delivery and Adoption of Cloud Computing Services in Contemporary Organizations, Pages 81-99, 2015.

A Formal Framework for Cloud Systems

Zakaria Benzadri, Chafia Bouanaka, Faïza Belala

In Delivery and Adoption of Cloud Computing Services in Contemporary Organizations, 2015.

Emerging Software as a Service and Analytics

Victor Chang, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 1-3, 2015.

Benefits and Challenges for Business Process Management in the Cloud

Ute Riemann

International Journal of Organizational and Collective Intelligence (IJOCI), 5(2), Pages 80-104, 2015.


Block-level De-duplication with Encrypted Data

Pasquale Puzio, Refik Molva, Melek Önen, Sergio Loureiro

Open Journal of Cloud Computing (OJCC), 1(1), Pages 10-18, 2014, Downloads: 13130, Citations: 20

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194448 | GNL-LP: 1132360617 | Meta-Data: tex xml rdf rss

Abstract: Deduplication is a storage saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees blocklevel deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.

BibTex:

    @Article{OJCC-v1i1n02_Puzio,
        title     = {Block-level De-duplication with Encrypted Data},
        author    = {Pasquale Puzio and
                     Refik Molva and
                     Melek \"{O}nen and
                     Sergio Loureiro},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {10--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194448},
        urn       = {urn:nbn:de:101:1-201705194448},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Deduplication is a storage saving technique which has been adopted by many cloud storage providers such as Dropbox. The simple principle of deduplication is that duplicate data uploaded by different users are stored only once. Unfortunately, deduplication is not compatible with encryption. As a scheme that allows deduplication of encrypted data segments, we propose ClouDedup, a secure and efficient storage service which guarantees blocklevel deduplication and data confidentiality at the same time. ClouDedup strengthens convergent encryption by employing a component that implements an additional encryption operation and an access control mechanism. We also propose to introduce an additional component which is in charge of providing a key management system for data blocks together with the actual deduplication operation. We show that the overhead introduced by these new components is minimal and does not impact the overall storage and computational costs.}
    }

5 citations in 2015:

An efficient confidentiality-preserving Proof of Ownership for deduplication.

Lorena González-Manzano, Agustín Orfila

Journal of Network and Computer Applications, 50, Pages 49-59, 2015.

BDO-SD: An efficient scheme for big data outsourcing with secure deduplication.

Mi Wen, Kejie Lu, Jingsheng Lei, Fengyong Li, Jing Li

In Conference on Computer Communications Workshops, INFOCOM Workshops, Hong Kong, China, Pages 214-219, 2015.

Analysis of hybrid cloud approach for private cloud in the de-duplication mechanism

K. Saritha, S. Subasree

In International Conference on Engineering and Technology (ICETECH), Pages 1-3, 2015.

Emerging Software as a Service and Analytics

Victor Chang, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 1-3, 2015.

Hybrid Model for Data Security in Cloud

Ogwueleka Francisca Nonyehem, Moses Timothy

IUP Journal of Information Technology, 11(3), Pages 7, 2015.


Measuring and analyzing German and Spanish customer satisfaction of using the iPhone 4S Mobile Cloud service

Victor Chang

Open Journal of Cloud Computing (OJCC), 1(1), Pages 19-26, 2014, Downloads: 7752, Citations: 8

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194450 | GNL-LP: 1132360633 | Meta-Data: tex xml rdf rss

Abstract: This paper presents the customer satisfaction analysis for measuring popularity in the Mobile Cloud, which is an emerging area in the Cloud and Big Data Computing. Organizational Sustainability Modeling (OSM) is the proposed method used in this research. The twelve-month of German and Spanish consumer data are used for the analysis to investigate the return and risk status associated with the ratings of customer satisfaction in the iPhone 4S Mobile Cloud services. Results show that there is a decline in the satisfaction ratings in Germany and Spain due to economic downturn and competitions in the market, which support our hypothesis. Key outputs have been explained and they confirm that all analysis and interpretations fulfill the criteria for OSM. The use of statistical and visualization method proposed by OSM can expose unexploited data and allows the stakeholders to understand the status of return and risk of their Cloud strategies easier than the use of other data analysis.

BibTex:

    @Article{OJCC-v1i1n03_Chang,
        title     = {Measuring and analyzing German and Spanish customer satisfaction of using the iPhone 4S Mobile Cloud service},
        author    = {Victor Chang},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {19--26},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194450},
        urn       = {urn:nbn:de:101:1-201705194450},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper presents the customer satisfaction analysis for measuring popularity in the Mobile Cloud, which is an emerging area in the Cloud and Big Data Computing. Organizational Sustainability Modeling (OSM) is the proposed method used in this research. The twelve-month of German and Spanish consumer data are used for the analysis to investigate the return and risk status associated with the ratings of customer satisfaction in the iPhone 4S Mobile Cloud services. Results show that there is a decline in the satisfaction ratings in Germany and Spain due to economic downturn and competitions in the market, which support our hypothesis. Key outputs have been explained and they confirm that all analysis and interpretations fulfill the criteria for OSM. The use of statistical and visualization method proposed by OSM can expose unexploited data and allows the stakeholders to understand the status of return and risk of their Cloud strategies easier than the use of other data analysis.}
    }

5 citations in 2015:

The role of cloud computing adoption in global business

Kijpokin Kasemsap

In Delivery and adoption of cloud computing services in contemporary organizations, Pages 26-55, 2015.

Cloud Computing and Frameworks for Organisational Cloud Adoption

Victor Chang, Robert John Walters, Gary B Wills

In Delivery and Adoption of Cloud Computing Services in Contemporary Organizations, Pages 1-25, 2015.

Cloud Computing: A Practical Overview Between Year 2009 and Year 2015

Yulin Yao

International Journal of Organizational and Collective Intelligence (IJOCI), 5(3), Pages 32-43, 2015.

Analyzing French and Italian iPhone 4S Mobile Cloud Customer Satisfaction Presented by Organizational Sustainability Modeling

Victor Chang

In Delivery and Adoption of Cloud Computing Services in Contemporary Organizations, Pages 81-99, 2015.

Emerging Software as a Service and Analytics

Victor Chang, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 1-3, 2015.


Designing a Benchmark for the Assessment of Schema Matching Tools

Fabien Duchateau, Zohra Bellahsene

Open Journal of Databases (OJDB), 1(1), Pages 3-25, 2014, Downloads: 10716, Citations: 13

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194573 | GNL-LP: 1132360838 | Meta-Data: tex xml rdf rss

Abstract: Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still manually performed. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem deals with the measuring of the post-match effort, a human cost that schema matching approaches aim at reducing. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.

BibTex:

    @Article{OJDB-v1i1n02_Duchateau,
        title     = {Designing a Benchmark for the Assessment of Schema Matching Tools},
        author    = {Fabien Duchateau and
                     Zohra Bellahsene},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {3--25},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194573},
        urn       = {urn:nbn:de:101:1-201705194573},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Over the years, many schema matching approaches have been developed to discover correspondences between schemas. Although this task is crucial in data integration, its evaluation, both in terms of matching quality and time performance, is still manually performed. Indeed, there is no common platform which gathers a collection of schema matching datasets to fulfil this goal. Another problem deals with the measuring of the post-match effort, a human cost that schema matching approaches aim at reducing. Consequently, we propose XBenchMatch, a schema matching benchmark with available datasets and new measures to evaluate this manual post-match effort and the quality of integrated schemas. We finally report the results obtained by different approaches, namely COMA++, Similarity Flooding and YAM. We show that such a benchmark is required to understand the advantages and failures of schema matching approaches. Therefore, it could help an end-user to select a schema matching tool which covers his/her needs.}
    }
5 citations in 2015:

Holistic Statistical Open Data integration based on integer linear programming

Alain Berro, Imen Megdiche, Olivier Teste

In 9th IEEE International Conference on Research Challenges in Information Science, RCIS 2015, Athens, Greece, Pages 468-479, 2015.

A Linear Program for Holistic Matching: Assessment on Schema Matching Benchmark

Alain Berro, Imen Megdiche, Olivier Teste

In Database and Expert Systems Applications - 26th International Conference, DEXA 2015, Valencia, Spain, Proceedings, Part II, Pages 383-398, 2015.

Intégration holistique des graphes basée sur la programmation linéaire pour l'entreposage des Open Data

Alain Berro, Imen Megdiche-Bousarsar, Olivier Teste

In 11èmes journées francophones sur les Entrepôts de Données et l’Analyse en Ligne (EDA 2015), Pages 113-128, 2015.

Pabench: Designing a taxonomy and implementing a benchmark for spatial entity matching

Bilal Berjawi, Fabien Duchateau, Franck Favetta, Maryvonne Miquel, Robert Laurini

In The Seventh International Conference on Advanced Geographic Information Systems, Applications, and Services, Pages 7-16, 2015.

Intégration holistique et entreposage automatique des données ouvertes

Imen Megdiche

2015. PhD thesis, Université de Toulouse

 Open Access 

Eventual Consistent Databases: State of the Art

Mawahib Musa Elbushra, Jan Lindström

Open Journal of Databases (OJDB), 1(1), Pages 26-41, 2014, Downloads: 20538, Citations: 15

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194582 | GNL-LP: 1132360846 | Meta-Data: tex xml rdf rss

Abstract: One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need partition tolerance and availability for scalability purposes but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify future research challenges on databases using eventual consistency.

BibTex:

    @Article{OJDB-v1i1n03_Elbushra,
        title     = {Eventual Consistent Databases: State of the Art},
        author    = {Mawahib Musa Elbushra and
                     Jan Lindstr\"{o}m},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {26--41},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194582},
        urn       = {urn:nbn:de:101:1-201705194582},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {One of the challenges of cloud programming is to achieve the right balance between availability and consistency in a distributed database. Cloud computing environments, particularly cloud databases, are rapidly increasing in importance, acceptance and usage in major applications, which need partition tolerance and availability for scalability purposes but sacrifice consistency (CAP theorem). In these environments, the data accessed by users is stored in a highly available storage system, and thus the use of paradigms such as eventual consistency has become more widespread. In this paper, we review state-of-the-art database systems using eventual consistency from both industry and research. Based on this review, we discuss the advantages and disadvantages of eventual consistency, and identify future research challenges on databases using eventual consistency.}
    }
3 citations in 2015:

Causal Consistent Databases

Mawahib Musa Elbushra, Jan Lindström

Open Journal of Databases (OJDB), 2(1), Pages 17-35, 2015.

The Pyrrho Book

Malcolm Crowe

2015.

Near Real-time Synchronization Approach for Heterogeneous Distributed Databases

Hassen Fadoua, Grissa Touzi Amel

In 7th International Conference on Advances in Databases, Knowledge, and Data Applications (DBKDA), Pages 107-113, 2015.

 Open Access 

Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data

Stephan Kessler, Erik Buchmann, Thorben Burghardt, Klemens Böhm

Open Journal of Information Systems (OJIS), 1(1), Pages 3-22, 2014, Downloads: 14020, Citations: 5

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194696 | GNL-LP: 113236096X | Meta-Data: tex xml rdf rss

Abstract: Time series anonymization is an important problem. A prominent example of time series is energy-consumption records, which might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information and thus reduce data quality considerably. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach with both synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.

BibTex:

    @Article{OJIS-v1i1n02_Kessler,
        title     = {Pattern-sensitive Time-series Anonymization and its Application to Energy-Consumption Data},
        author    = {Stephan Kessler and
                     Erik Buchmann and
                     Thorben Burghardt and
                     Klemens B\"{o}hm},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {3--22},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194696},
        urn       = {urn:nbn:de:101:1-201705194696},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Time series anonymization is an important problem. A prominent example of time series is energy-consumption records, which might reveal details of the daily routine of a household. Existing privacy approaches for time series, e.g., from the field of trajectory anonymization, assume that every single value of a time series contains sensitive information and thus reduce data quality considerably. In contrast, we consider time series where it is combinations of tuples that represent personal information. We propose (n; l; k)-anonymity, geared to anonymization of time-series data with minimal information loss, assuming that an adversary may learn a few data points. We propose several heuristics to obtain (n; l; k)-anonymity, and we evaluate our approach with both synthetic and real data. Our experiments confirm that it is sufficient to modify time series only moderately in order to fulfill meaningful privacy requirements.}
    }
2 citations in 2015:

Fast summarization and anonymization of multivariate big time series

Dymitr Ruta, Ling Cen, Ernesto Damiani

In IEEE International Conference on Big Data, Big Data 2015, Santa Clara, CA, USA, Pages 1901-1904, 2015.

Privacy-Enhancing Methods for Time Series and their Impact on Electronic Markets

Stephan Kessler

2015. PhD thesis, Karlsruhe Institute of Technology (KIT)

 Open Access 

Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic

Juhani Iivari

Open Journal of Information Systems (OJIS), 1(1), Pages 23-53, 2014, Downloads: 12284, Citations: 9

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194708 | GNL-LP: 1132360978 | Meta-Data: tex xml rdf rss

Abstract: This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer-mediated communication applications. Based on a field study of 113 Facebook users, it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer the questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence, it places special focus on system characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfaction with six of these seven designable qualities has significant paths in the proposed nomological network.

BibTex:

    @Article{OJIS-v1i1n03_Iivari,
        title     = {Perceived Sociability of Use and Individual Use of Social Networking Sites - A Field Study of Facebook Use in the Arctic},
        author    = {Juhani Iivari},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {23--53},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194708},
        urn       = {urn:nbn:de:101:1-201705194708},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper investigates determinants of individual use of social network sites (SNSs). It introduces a new construct, Perceived Sociability of Use (PSOU), to explain the use of such computer-mediated communication applications. Based on a field study of 113 Facebook users, it shows that PSOU in the sense of maintaining social contacts is a significant predictor of Perceived Benefits (PB), Perceived Enjoyment (PE), attitude toward use and intention to use. Inspired by Benbasat and Barki, this paper also attempts to answer the questions "what makes the system useful", "what makes the system enjoyable to use" and "what makes the system sociable to use". As a consequence, it places special focus on system characteristics of IT applications as potential predictors of PSOU, PB and PE, introducing seven such designable qualities (user-to-user interactivity, user identifiability, system quality, information quality, usability, user-to-system interactivity, and aesthetics). The results indicate that especially satisfaction with user-to-user interactivity is a significant determinant of PSOU, and that satisfaction with six of these seven designable qualities has significant paths in the proposed nomological network.}
    }
3 citations in 2015:

Connect Me! Antecedents and Impact of Social Connectedness in Enterprise Social Software.

Maurice Kügler, Sven Dittes, Stefan Smolnik, Alexander Richter

Business & Information Systems Engineering, 57(3), Pages 181-196, 2015.

Risks and motivation in the use of social network sites: an empirical study of university students

Nugi Nkwe

2015. Dissertation, University of the Witwatersrand

Measuring university students’ awareness of finding jobs through social network sites use

Muathe Abdu

International Journal of Social Sciences and Education Research, 2(2), Pages 402-409, 2015.

 Open Access 

MapReduce-based Solutions for Scalable SPARQL Querying

José M. Giménez-Garcia, Javier D. Fernández, Miguel A. Martínez-Prieto

Open Journal of Semantic Web (OJSW), 1(1), Pages 1-18, 2014, Downloads: 11938, Citations: 10

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194824 | GNL-LP: 1132361168 | Meta-Data: tex xml rdf rss

Abstract: The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.

BibTex:

    @Article{OJSW-v1i1n02_Garcia,
        title     = {MapReduce-based Solutions for Scalable SPARQL Querying},
        author    = {Jos\'{e} M. Gim\'{e}nez-Garcia and
                     Javier D. Fern\'{a}ndez and
                     Miguel A. Mart\'{i}nez-Prieto},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {1--18},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194824},
        urn       = {urn:nbn:de:101:1-201705194824},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The use of RDF to expose semantic data on the Web has seen a dramatic increase over the last few years. Nowadays, RDF datasets are so big and interconnected that, in fact, classical mono-node solutions present significant scalability problems when trying to manage big semantic data. MapReduce, a standard framework for distributed processing of great quantities of data, is earning a place among the distributed solutions facing RDF scalability issues. In this article, we survey the most important works addressing RDF management and querying through diverse MapReduce approaches, with a focus on their main strategies, optimizations and results.}
    }
3 citations in 2015:

The Solid architecture for real-time management of big semantic data

Miguel A. Martínez-Prieto, Carlos E. Cuesta, Mario Arias, Javier D. Fernández

Future Generation Computer Systems, 47, Pages 62-79, 2015.

Scalable RDF compression with MapReduce and HDT

José Miguel Giménez García

2015. Master's thesis (Máster en Investigación en Tecnologías de la Información y las Comunicaciones), Universidad de Valladolid

A Research on RDF Analytical Query Optimization using MapReduce with SPARQL

Pravinsinh Mori, AR Kazi, Sandip Chauhan

International Journal of Computer Science and Mobile Computing (IJCSMC), 4(5), Pages 305-313, 2015.

 Open Access 

BioSStore: A Client Interface for a Repository of Semantically Annotated Bioinformatics Web Services

Ismael Navas-Delgado, José F. Aldana-Montes

Open Journal of Semantic Web (OJSW), 1(1), Pages 19-29, 2014, Downloads: 10785, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194836 | GNL-LP: 1132361176 | Meta-Data: tex xml rdf rss

Abstract: Bioinformatics has shown itself to be a domain in which Web services are used extensively. In this domain, simple but real services are being developed. Thus, there are huge repositories of real services available (for example, the BioMOBY main repository includes more than 1500 services). Besides, bioinformatics repositories usually have active communities using and working on improvements. However, these kinds of repositories do not exploit the full potential of Web services (and SOA, Service Oriented Applications, in general). On the other hand, sophisticated technologies have been proposed to improve SOA, including the annotation of Web services to explicitly describe them. However, these approaches are lacking in repositories with real services. In the work presented here, we address the drawbacks present in bioinformatics services and try to improve the current semantic model by introducing the use of the W3C standard Semantic Annotations for WSDL and XML Schema (SAWSDL) and related proposals (WSMO Lite). This paper focuses on a user interface that takes advantage of a repository of semantically annotated bioinformatics Web services. In this way, we exploit semantics for the discovery of Web services, showing how the use of semantics improves user searches. The BioSStore is available at http://biosstore.khaos.uma.es. This portal will also contain future developments of this proposal.

BibTex:

    @Article{OJSW-v1i1n03_Delgado,
        title     = {BioSStore: A Client Interface for a Repository of Semantically Annotated Bioinformatics Web Services},
        author    = {Ismael Navas-Delgado and
                     Jos\'{e} F. Aldana-Montes},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {19--29},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194836},
        urn       = {urn:nbn:de:101:1-201705194836},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Bioinformatics has shown itself to be a domain in which Web services are used extensively. In this domain, simple but real services are being developed. Thus, there are huge repositories of real services available (for example, the BioMOBY main repository includes more than 1500 services). Besides, bioinformatics repositories usually have active communities using and working on improvements. However, these kinds of repositories do not exploit the full potential of Web services (and SOA, Service Oriented Applications, in general). On the other hand, sophisticated technologies have been proposed to improve SOA, including the annotation of Web services to explicitly describe them. However, these approaches are lacking in repositories with real services. In the work presented here, we address the drawbacks present in bioinformatics services and try to improve the current semantic model by introducing the use of the W3C standard Semantic Annotations for WSDL and XML Schema (SAWSDL) and related proposals (WSMO Lite). This paper focuses on a user interface that takes advantage of a repository of semantically annotated bioinformatics Web services. In this way, we exploit semantics for the discovery of Web services, showing how the use of semantics improves user searches. The BioSStore is available at http://biosstore.khaos.uma.es. This portal will also contain future developments of this proposal.}
    }
0 citations in 2015

 Open Access 

Developing Knowledge Models of Social Media: A Case Study on LinkedIn

Jinwu Li, Vincent Wade, Melike Sah

Open Journal of Semantic Web (OJSW), 1(2), Pages 1-24, 2014, Downloads: 15128

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194841 | GNL-LP: 1132361206 | Meta-Data: tex xml rdf rss

Abstract: User Generated Content (UGC) exchanged via large Social Networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and semantic web technologies. In addition, the paper discusses the crawling and data normalisation strategies that we developed so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine on LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) We evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies.
In particular, accuracy is evaluated by comparison of manually entered metadata fields and the metadata which was automatically extracted. (3) User perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how the extracted metadata suits for a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.

BibTex:

    @Article{OJSW-v1i2n01_Li,
        title     = {Developing Knowledge Models of Social Media: A Case Study on LinkedIn},
        author    = {Jinwu Li and
                     Vincent Wade and
                     Melike Sah},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194841},
        urn       = {urn:nbn:de:101:1-201705194841},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {User Generated Content (UGC) exchanged via large Social Networks is considered a very important knowledge source about all aspects of social engagement (e.g. interests, events, personal information, personal preferences, social experience, skills etc.). However, this data is inherently unstructured or semi-structured. In this paper, we describe the results of a case study on LinkedIn Ireland public profiles. The study investigated how the available knowledge could be harvested from LinkedIn in a novel way by developing and applying a reusable knowledge model using linked open data vocabularies and semantic web technologies. In addition, the paper discusses the crawling and data normalisation strategies that we developed so that high-quality metadata could be extracted from the LinkedIn public profiles. Apart from the search engine on LinkedIn.com itself, there are no well-known publicly available endpoints that allow users to query knowledge concerning the interests of individuals on LinkedIn. In particular, we present a system that extracts and converts information from raw web pages of LinkedIn public profiles into a machine-readable, interoperable format using data mining and Semantic Web technologies. The outcomes of our research can be summarized as follows: (1) a reusable knowledge model which can represent LinkedIn public user and company profiles using linked data vocabularies and structured data, (2) a public SPARQL endpoint to access structured data about Irish industry and public profiles, (3) a scalable data crawling strategy and a mashup-based data normalisation approach. The data mining and knowledge representation approaches proposed in this paper are evaluated in four ways: (1) We evaluate metadata quality using automated techniques, such as data completeness and data linkage. (2) Data accuracy is evaluated via user studies. 
In particular, accuracy is evaluated by comparison of manually entered metadata fields and the metadata which was automatically extracted. (3) User perceived metadata quality is measured by asking users to rate the automatically extracted metadata in user studies. (4) Finally, the paper discusses how the extracted metadata suits for a user interface design. Overall, the evaluations show that the extracted metadata is of high quality and meets the requirements of a data visualisation user interface.}
    }
0 citations in 2015

 Open Access 

SIWeb: understanding the Interests of the Society through Web data Analysis

Marco Furini, Simone Montangero

Open Journal of Web Technologies (OJWT), 1(1), Pages 1-14, 2014, Downloads: 11917, Citations: 4

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291334 | GNL-LP: 1133021522 | Meta-Data: tex xml rdf rss

Abstract: The high availability of user-generated content in the Web scenario represents a tremendous asset for understanding various social phenomena. Methods and commercial products that exploit the widespread use of the Web as a way of conveying personal opinions have been proposed, but a criticism is that these approaches may produce a partial, or distorted, understanding of society, because most of them focus on definite scenarios, use specific platforms, base their analysis on the sole magnitude of data, or treat the different Web resources with the same importance. In this paper, we present SIWeb (Social Interests through Web Analysis), a novel mechanism designed to measure the interest the society has in a topic (e.g., a real-world phenomenon, an event, a person, a thing). SIWeb is general purpose (it can be applied to any decision-making process), cross-platform (it uses the entire Web space, from social media to websites, from tags to reviews), and time effective (it measures the time correlation between the Web resources). It uses fractal analysis to detect the temporal relations behind all the Web resources (e.g., Web pages, RSS, newsgroups, etc.) that talk about a topic and combines this number with the temporal relations to give an insight into the interest the society has in a topic. The evaluation of the proposal shows that SIWeb might be helpful in decision-making processes as it reflects the interests the society has in a specific topic.

BibTex:

    @Article{OJWT-v1i1n01_Furini,
        title     = {SIWeb: understanding the Interests of the Society through Web data Analysis},
        author    = {Marco Furini and
                     Simone Montangero},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {1--14},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291334},
        urn       = {urn:nbn:de:101:1-201705291334},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The high availability of user-generated content in the Web scenario represents a tremendous asset for understanding various social phenomena. Methods and commercial products that exploit the widespread use of the Web as a way of conveying personal opinions have been proposed, but a criticism is that these approaches may produce a partial, or distorted, understanding of society, because most of them focus on definite scenarios, use specific platforms, base their analysis on the sole magnitude of data, or treat the different Web resources with the same importance. In this paper, we present SIWeb (Social Interests through Web Analysis), a novel mechanism designed to measure the interest the society has in a topic (e.g., a real-world phenomenon, an event, a person, a thing). SIWeb is general purpose (it can be applied to any decision-making process), cross-platform (it uses the entire Web space, from social media to websites, from tags to reviews), and time effective (it measures the time correlation between the Web resources). It uses fractal analysis to detect the temporal relations behind all the Web resources (e.g., Web pages, RSS, newsgroups, etc.) that talk about a topic and combines this number with the temporal relations to give an insight into the interest the society has in a topic. The evaluation of the proposal shows that SIWeb might be helpful in decision-making processes as it reflects the interests the society has in a specific topic.}
    }
1 citation in 2015:

TRank: Ranking Twitter users according to specific topics.

Manuela Montangero, Marco Furini

In 12th Annual IEEE Consumer Communications and Networking Conference, CCNC 2015, Las Vegas, NV, USA, January 9-12, 2015, Pages 767-772, 2015.

 Open Access 

Integrating Human Factors and Semantic Mark-ups in Adaptive Interactive Systems

Marios Belk, Panagiotis Germanakos, Efi Papatheocharous, Panayiotis Andreou, George Samaras

Open Journal of Web Technologies (OJWT), 1(1), Pages 15-26, 2014, Downloads: 11989, Citations: 1

Full-Text: pdf | URN: urn:nbn:de:101:1-2017052611313 | GNL-LP: 113283600X | Meta-Data: tex xml rdf rss

Abstract: This paper focuses on incorporating individual differences in cognitive processing and semantic mark-ups in the context of adaptive interactive systems. In particular, a semantic Web-based adaptation framework is proposed that enables Web content providers to enrich content and functionality of Web environments with semantic mark-ups. The Web content is created using a Web authoring tool and is further processed and reconstructed by an adaptation mechanism based on cognitive factors of users. The main aim of this work is to investigate the added value of personalising content and functionality of Web environments based on the unique cognitive characteristics of users. Accordingly, a user study has been conducted that entailed a psychometric-based survey for extracting the users' cognitive characteristics, combined with a real usage scenario of an existing commercial Web environment that was enriched with semantic mark-ups and personalised based on different adaptation effects. The paper provides interesting insights into the design and development of adaptive interactive systems based on cognitive factors and semantic mark-ups.

BibTex:

    @Article{OJWT-v1i1n02_Belk,
        title     = {Integrating Human Factors and Semantic Mark-ups in Adaptive Interactive Systems},
        author    = {Marios Belk and
                     Panagiotis Germanakos and
                     Efi Papatheocharous and
                     Panayiotis Andreou and
                     George Samaras},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2014},
        volume    = {1},
        number    = {1},
        pages     = {15--26},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017052611313},
        urn       = {urn:nbn:de:101:1-2017052611313},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {This paper focuses on incorporating individual differences in cognitive processing and semantic mark-ups in the context of adaptive interactive systems. In particular, a semantic Web-based adaptation framework is proposed that enables Web content providers to enrich content and functionality of Web environments with semantic mark-ups. The Web content is created using a Web authoring tool and is further processed and reconstructed by an adaptation mechanism based on cognitive factors of users. The main aim of this work is to investigate the added value of personalising content and functionality of Web environments based on the unique cognitive characteristics of users. Accordingly, a user study has been conducted that entailed a psychometric-based survey for extracting the users' cognitive characteristics, combined with a real usage scenario of an existing commercial Web environment that was enriched with semantic mark-ups and personalised based on different adaptation effects. The paper provides interesting insights into the design and development of adaptive interactive systems based on cognitive factors and semantic mark-ups.}
    }
0 citations in 2015

 Open Access 

Using Business Intelligence to Improve DBA Productivity

Eric A. Mortensen, En Cheng

Open Journal of Databases (OJDB), 1(2), Pages 1-16, 2014, Downloads: 12228

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194595 | GNL-LP: 1132360854 | Meta-Data: tex xml rdf rss

Abstract: The amount of data collected and used by companies has grown rapidly in size over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases at the speed they have grown has made it difficult for DBA teams to provide the same level of service that the business requires they provide. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases needed work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.

BibTex:

    @Article{OJDB-v1i2n01_Mortensen,
        title     = {Using Business Intelligence to Improve DBA Productivity},
        author    = {Eric A. Mortensen and
                     En Cheng},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--16},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194595},
        urn       = {urn:nbn:de:101:1-201705194595},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {The amount of data collected and used by companies has grown rapidly in size over the last decade. Business leaders are now using Business Intelligence (BI) systems to make effective business decisions against large amounts of data. The growth in the size of data has been a major challenge for Database Administrators (DBAs). The increase in the number and size of databases at the speed they have grown has made it difficult for DBA teams to provide the same level of service that the business requires they provide. The methods that DBAs have used in the last several decades can no longer be performed with the efficiency needed over all of the databases they administer. This paper presents the first BI system to improve DBA productivity and to provide important data metrics for Information Technology (IT) managers. The BI system has been well received by Sherwin Williams Database Administrators. It has i) enabled the DBA team to quickly determine which databases needed work by a DBA without manually logging into the system; ii) helped the DBA team and its management to easily answer other business users' questions without using DBAs' time to research the issue; and iii) helped the DBA team to provide the business data for unanticipated audit requests.}
    }
0 citations in 2015

 Open Access 

Which NoSQL Database? A Performance Overview

Veronika Abramova, Jorge Bernardino, Pedro Furtado

Open Journal of Databases (OJDB), 1(2), Pages 17-24, 2014, Downloads: 29760, Citations: 89

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194607 | GNL-LP: 1132360862 | Meta-Data: tex xml rdf rss

Abstract: NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate the five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.
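The read/update mixes of the Yahoo! Cloud Serving Benchmark can be mimicked with a toy harness. The sketch below is only an illustration under stated assumptions: `KVStore`, `run_workload` and all parameters are hypothetical, and a plain dict stands in for a real NoSQL client.

```python
import random
import time

class KVStore:
    """Plain in-memory dict standing in for a real NoSQL client (hypothetical)."""
    def __init__(self):
        self.data = {}
    def read(self, key):
        return self.data.get(key)
    def update(self, key, value):
        self.data[key] = value

def run_workload(store, n_ops=10000, read_fraction=0.95, n_keys=1000, seed=42):
    """Issue a YCSB-style read/update mix (workload B is roughly 95% reads)."""
    rng = random.Random(seed)
    reads = updates = 0
    start = time.perf_counter()
    for _ in range(n_ops):
        key = rng.randrange(n_keys)
        if rng.random() < read_fraction:
            store.read(key)
            reads += 1
        else:
            store.update(key, rng.random())
            updates += 1
    elapsed = time.perf_counter() - start
    return {"reads": reads, "updates": updates, "ops_per_s": n_ops / elapsed}

stats = run_workload(KVStore())  # read-heavy mix, in the spirit of YCSB workload B
```

Swapping the stub for an actual client (Cassandra, MongoDB, etc.) and comparing `ops_per_s` across stores is the essence of the comparison the paper performs with YCSB.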

BibTex:

    @Article{OJDB-v1i2n02_Abramova,
        title     = {Which NoSQL Database? A Performance Overview},
        author    = {Veronika Abramova and
                     Jorge Bernardino and
                     Pedro Furtado},
        journal   = {Open Journal of Databases (OJDB)},
        issn      = {2199-3459},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {17--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194607},
        urn       = {urn:nbn:de:101:1-201705194607},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {NoSQL data stores are widely used to store and retrieve possibly large amounts of data, typically in a key-value format. There are many NoSQL types with different performances, and thus it is important to compare them in terms of performance and verify how the performance is related to the database type. In this paper, we evaluate the five most popular NoSQL databases: Cassandra, HBase, MongoDB, OrientDB and Redis. We compare those databases in terms of query performance, based on reads and updates, taking into consideration the typical workloads, as represented by the Yahoo! Cloud Serving Benchmark. This comparison allows users to choose the most appropriate database according to the specific mechanisms and application needs.}
    }
14 citations in 2015:

Choosing the right NoSQL database for the job: a quality attribute evaluation

João Ricardo Lourenço, Bruno Cabral, Paulo Carreiro, Marco Vieira, Jorge Bernardino

Journal of Big Data, 2(1), Pages 18, 2015.

An advanced comparative study of the most promising nosql and newsql databases with a multi-criteria analysis method

Omar Hajoui, Rachid Dehbi, Mohammed Talea, Zouhair Ibn Batouta

Journal of Theoretical and Applied Information Technology, 81(3), Pages 579, 2015.

Experimental evaluation of a flexible I/O architecture for accelerating workflow engines in cloud environments

Francisco Rodrigo Duro, Javier García Blas, Florin Isaila, Jesús Carretero

In Proceedings of the 2015 International Workshop on Data-Intensive Scalable Computing Systems, DISCS@SC 2015, Austin, Texas, USA, Pages 6:1-6:8, 2015.

Database technologies in the world of big data

Jaroslav Pokorný

In Proceedings of the 16th International Conference on Computer Systems and Technologies, CompSysTech, Dublin, Ireland, Pages 1-12, 2015.

NoSQL Databases and Data Modeling Techniques for a Document-oriented NoSQL Database

Robert T. Mason

In Proceedings of Informing Science & IT Education Conference (InSITE), Pages 259-268, 2015.

Performance and scalability of voldemort NoSQL

Ricardo Neves, Jorge Bernardino

In 10th Iberian Conference on Information Systems and Technologies (CISTI), Pages 1-6, 2015.

Event based transient notification architecture and NoSQL solution for astronomical data management

Yu Zhao

2015. A thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Computer Science at Massey University, Albany (Auckland), New Zealand

Aligning Machine Learning for the Lambda Architecture

Visakh Nair

2015. Master’s Thesis at Aalto University

Data Analysis in the Cloud: Models, Techniques and Applications

Domenico Talia, Paolo Trunfio, Fabrizio Marozzo

2015. Elsevier Science Publishers B. V.

Optymalizacja parametrów aplikacji w procesie wytwarzania oprogramowania dla big data (Optimization of Big Data Application Attributes considering Software Development Process)

Pawel Kaczmarek

Zeszyty Naukowe Wydziału Elektrotechniki i Automatyki Politechniki Gdańskiej, 2015.

Performance measurement of heterogeneous workflow engines

Marco Argenti

2015. Master thesis at Università della Svizzera Italiana

Анализ производительности РСУБД PostgreSQL и NoSQL-хранилищ Redis и Apache Cassandra (Performance Analysis of the PostgreSQL RDBMS and the NoSQL Stores Redis and Apache Cassandra)

Artem Pavlovich Sumaneev (Артем Павлович Суманеев)

2015.

A performance evaluation of SQL and NOSQL Database on HealthCare Data

Taj Eldeen Abubaker

2015. Master thesis at University of Science and Technology, Omdurman, Sudan

Evaluation of MongoDB and Cassandra Database Performance on HealthCare Data

Mohammed Hussein Mohammed Musa

2015. Master thesis at University of Science and Technology, Omdurman, Sudan

 Open Access 

P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud

Sven Groppe, Thomas Kiencke, Stefan Werner, Dennis Heinrich, Marc Stelzner, Le Gruenwald

Open Journal of Semantic Web (OJSW), 1(2), Pages 25-55, 2014, Downloads: 14742, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194858 | GNL-LP: 1132361214 | Meta-Data: tex xml rdf rss

Presentation: Video

Abstract: Increasingly data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on needs. In this work we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task, as it has been done traditionally, we precompute the Bloom filters as much as possible and store them in the indices besides the data. The experimental results with data sets up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.
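The pruning idea can be illustrated with a minimal sketch; the `BloomFilter` class below is hypothetical and not the paper's implementation. A filter precomputed over the join values stored in an index later discards intermediate bindings that cannot join: false positives are possible, but there are no false negatives.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter (hypothetical stand-in for precomputed filters)."""
    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = bytearray(m)

    def _positions(self, item):
        # k bit positions derived from salted SHA-256 digests
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def might_contain(self, item):
        # no false negatives; false positives possible
        return all(self.bits[pos] for pos in self._positions(item))

# At index-construction time: precompute a filter over the join values
# occurring in one stored triple pattern.
stored_join_values = {"alice", "bob", "carol"}
prefilter = BloomFilter()
for v in stored_join_values:
    prefilter.add(v)

# At query time: discard intermediate bindings that cannot join, before
# any costly distributed join is executed.
bindings = ["alice", "dave", "carol", "erin"]
pruned = [v for v in bindings if prefilter.might_contain(v)]
```

Because the filter is built once at load time and stored beside the index, the query processor pays only cheap membership probes instead of constructing filters during query execution.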

BibTex:

    @Article{OJSW-v1i2n02_Groppe,
        title     = {P-LUPOSDATE: Using Precomputed Bloom Filters to Speed Up SPARQL Processing in the Cloud},
        author    = {Sven Groppe and
                     Thomas Kiencke and
                     Stefan Werner and
                     Dennis Heinrich and
                     Marc Stelzner and
                     Le Gruenwald},
        journal   = {Open Journal of Semantic Web (OJSW)},
        issn      = {2199-336X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {25--55},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194858},
        urn       = {urn:nbn:de:101:1-201705194858},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Increasingly data on the Web is stored in the form of Semantic Web data. Because of today's information overload, it becomes very important to store and query these big datasets in a scalable way and hence in a distributed fashion. Cloud Computing offers such a distributed environment with dynamic reallocation of computing and storage resources based on needs. In this work we introduce a scalable distributed Semantic Web database in the Cloud. In order to reduce the number of (unnecessary) intermediate results early, we apply Bloom filters. Instead of computing Bloom filters during query processing, a time-consuming task, as it has been done traditionally, we precompute the Bloom filters as much as possible and store them in the indices besides the data. The experimental results with data sets up to 1 billion triples show that our approach speeds up query processing significantly and sometimes even reduces the processing time to less than half.}
    }
0 citations in 2015

 Open Access 

A Comparative Evaluation of Current HTML5 Web Video Implementations

Martin Hoernig, Andreas Bigontina, Bernd Radig

Open Journal of Web Technologies (OJWT), 1(2), Pages 1-9, 2014, Downloads: 28321, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291328 | GNL-LP: 1133021514 | Meta-Data: tex xml rdf rss

Abstract: HTML5 video is the upcoming standard for playing videos on the World Wide Web. Although its specification has not been fully adopted yet, all major browsers provide the HTML5 video element and web developers already rely on its functionality. But there are differences between implementations and inaccuracies that trouble the web developer community. To help to improve the current situation we draw a comparison between the most important web browsers. We focus on the event mechanism, since it is essential for interacting with the video element. Furthermore, we compare the seeking accuracy, which is relevant for more specialized applications. Our tests reveal a variety of differences between browser interfaces and show that even simple software solutions may still need third-party plugins in today's browsers.

BibTex:

    @Article{OJWT-v1i2n01_Hoernig,
        title     = {A Comparative Evaluation of Current HTML5 Web Video Implementations},
        author    = {Martin Hoernig and
                     Andreas Bigontina and
                     Bernd Radig},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--9},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291328},
        urn       = {urn:nbn:de:101:1-201705291328},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {HTML5 video is the upcoming standard for playing videos on the World Wide Web. Although its specification has not been fully adopted yet, all major browsers provide the HTML5 video element and web developers already rely on its functionality. But there are differences between implementations and inaccuracies that trouble the web developer community. To help to improve the current situation we draw a comparison between the most important web browsers. We focus on the event mechanism, since it is essential for interacting with the video element. Furthermore, we compare the seeking accuracy, which is relevant for more specialized applications. Our tests reveal a variety of differences between browser interfaces and show that even simple software solutions may still need third-party plugins in today's browsers.}
    }
1 citation in 2015:

WVSNP-DASH: Name-Based Segmented Video Streaming

Adolph Seema, Lukas Schwoebel, Tejas Shah, Jeffery Morgan, Martin Reisslein

IEEE Transactions on Broadcasting (TBC), 61(3), Pages 346-355, 2015.

 Open Access 

A Self-Optimizing Cloud Computing System for Distributed Storage and Processing of Semantic Web Data

Sven Groppe, Johannes Blume, Dennis Heinrich, Stefan Werner

Open Journal of Cloud Computing (OJCC), 1(2), Pages 1-14, 2014, Downloads: 13629, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194478 | GNL-LP: 113236065X | Meta-Data: tex xml rdf rss

Abstract: Clouds are dynamic networks of common, off-the-shelf computers used to build computation farms. The rapid growth of databases in the context of the semantic web requires efficient ways to store and process this data. Using cloud technology for storing and processing Semantic Web data is an obvious way to overcome difficulties in storing and processing the enormously large present and future datasets of the Semantic Web. This paper presents a new approach for storing Semantic Web data, such that operations for the evaluation of Semantic Web queries are more likely to be processed only on local data, instead of using costly distributed operations. An experimental evaluation demonstrates the performance improvements in comparison to a naive distribution of Semantic Web data.
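One way to realize the "process on local data" idea is to co-locate triples by their join key. The sketch below is a hypothetical placement rule for illustration, not the paper's actual strategy: triples are hash-partitioned by subject, so every triple sharing a subject lives on one node and a subject-subject join needs no network transfer.

```python
import hashlib

def node_for(term, n_nodes):
    """Stable hash placement rule (hypothetical, for illustration)."""
    digest = hashlib.sha256(term.encode()).digest()
    return int.from_bytes(digest[:8], "big") % n_nodes

def place_triples(triples, n_nodes=4):
    """Distribute RDF triples by subject, so all triples sharing a subject
    are co-located and a subject-subject join can run entirely locally."""
    nodes = {i: [] for i in range(n_nodes)}
    for s, p, o in triples:
        nodes[node_for(s, n_nodes)].append((s, p, o))
    return nodes

triples = [
    ("s1", "knows", "s2"),
    ("s1", "name", "Alice"),
    ("s2", "name", "Bob"),
]
nodes = place_triples(triples)
```

The trade-off is that joins on other positions (e.g. object-object) may still require data movement, which is why placement must be tuned to the expected query workload.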

BibTex:

    @Article{OJCC-v1i2n01_Groppe,
        title     = {A Self-Optimizing Cloud Computing System for Distributed Storage and Processing of Semantic Web Data},
        author    = {Sven Groppe and
                     Johannes Blume and
                     Dennis Heinrich and
                     Stefan Werner},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--14},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194478},
        urn       = {urn:nbn:de:101:1-201705194478},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Clouds are dynamic networks of common, off-the-shelf computers used to build computation farms. The rapid growth of databases in the context of the semantic web requires efficient ways to store and process this data. Using cloud technology for storing and processing Semantic Web data is an obvious way to overcome difficulties in storing and processing the enormously large present and future datasets of the Semantic Web. This paper presents a new approach for storing Semantic Web data, such that operations for the evaluation of Semantic Web queries are more likely to be processed only on local data, instead of using costly distributed operations. An experimental evaluation demonstrates the performance improvements in comparison to a naive distribution of Semantic Web data.}
    }
0 citations in 2015

 Open Access 

Evaluation of Node Failures in Cloud Computing Using Empirical Data

Abdulelah Alwabel, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 1(2), Pages 15-24, 2014, Downloads: 10208, Citations: 3

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194435 | GNL-LP: 1132360609 | Meta-Data: tex xml rdf rss

Abstract: Cloud has emerged as a new computing paradigm that promises to move into the computing-as-utility era. Desktop Cloud is a new type of Cloud computing introduced to further achieve this ambition with an aim to reduce costs. It merges two computing models: Cloud computing and volunteer computing. The aim of Desktop Cloud is to provide Cloud services out of infrastructure that is not made for this purpose, like PCs and laptops. Such computing resources lead to a high level of volatility as a result of the fact that they can leave without prior knowledge. This paper studies the impact of node failures using evaluation metrics based on real data collected from a public archive to simulate failure events in the infrastructure of a Desktop Cloud. The contribution of this paper is: (i) analysing the failure events, (ii) proposing metrics to evaluate Desktop Clouds, and (iii) evaluating several VM allocation mechanisms in the presence of node failures.
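Metrics of this kind can be sketched in a few lines. The example below uses hypothetical names and numbers and assumes a single-copy allocation with no replication; it derives a node failure rate and a task-loss fraction from a list of failure events, in the spirit of (but not identical to) the paper's evaluation.

```python
def availability_metrics(n_nodes, failed_nodes, tasks_per_node):
    """Report a node failure rate and a task-loss fraction for one run,
    assuming each task runs on a single node with no replication
    (hypothetical metrics, for illustration only)."""
    failed = set(failed_nodes)
    lost = sum(tasks_per_node.get(n, 0) for n in failed)
    total = sum(tasks_per_node.values())
    return {
        "failure_rate": len(failed) / n_nodes,
        "task_loss": lost / total if total else 0.0,
    }

# Example: an allocation mechanism placed 20 tasks on 3 of 4 nodes; node 2 failed.
tasks = {0: 5, 1: 5, 2: 10}
metrics = availability_metrics(n_nodes=4, failed_nodes=[2], tasks_per_node=tasks)
# metrics["failure_rate"] == 0.25, metrics["task_loss"] == 0.5
```

Feeding real failure traces into such a function and varying the allocation (i.e. `tasks_per_node`) is how different VM allocation mechanisms can be compared under node volatility.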

BibTex:

    @Article{OJCC-2014v1i2n02_Alwabel,
        title     = {Evaluation of Node Failures in Cloud Computing Using Empirical Data},
        author    = {Abdulelah Alwabel and
                     Robert John Walters and
                     Gary B. Wills},
        journal   = {Open Journal of Cloud Computing (OJCC)},
        issn      = {2199-1987},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {15--24},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194435},
        urn       = {urn:nbn:de:101:1-201705194435},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Cloud has emerged as a new computing paradigm that promises to move into the computing-as-utility era. Desktop Cloud is a new type of Cloud computing introduced to further achieve this ambition with an aim to reduce costs. It merges two computing models: Cloud computing and volunteer computing. The aim of Desktop Cloud is to provide Cloud services out of infrastructure that is not made for this purpose, like PCs and laptops. Such computing resources lead to a high level of volatility as a result of the fact that they can leave without prior knowledge. This paper studies the impact of node failures using evaluation metrics based on real data collected from a public archive to simulate failure events in the infrastructure of a Desktop Cloud. The contribution of this paper is: (i) analysing the failure events, (ii) proposing metrics to evaluate Desktop Clouds, and (iii) evaluating several VM allocation mechanisms in the presence of node failures.}
    }
2 citations in 2015:

Emerging Software as a Service and Analytics

Victor Chang, Robert John Walters, Gary B. Wills

Open Journal of Cloud Computing (OJCC), 2(1), Pages 1-3, 2015.

Evaluation Metrics for VM Allocation Mechanisms in Desktop Clouds.

Abdulelah Alwabel, Robert John Walters, Gary B. Wills

In ESaaSA 2015 - Proceedings of the 2nd International Workshop on Emerging Software as a Service and Analytics, Lisbon, Portugal, 20-22 May, 2015., Pages 63-68, 2015.

 Open Access 

Detecting Data-Flow Errors in BPMN 2.0

Silvia von Stackelberg, Susanne Putze, Jutta Mülle, Klemens Böhm

Open Journal of Information Systems (OJIS), 1(2), Pages 1-19, 2014, Downloads: 13882, Citations: 41

Full-Text: pdf | URN: urn:nbn:de:101:1-2017052611934 | GNL-LP: 1132836972 | Meta-Data: tex xml rdf rss

Abstract: Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.
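The "missing data" anti-pattern can be illustrated on a drastically simplified, purely sequential process model; the paper's actual detection runs model checking on Petri Nets unfolded with data semantics. The toy check below (all task and data-object names hypothetical) flags a task that reads a data object no earlier task has written.

```python
def missing_data_errors(process):
    """Flag the 'missing data' anti-pattern on a sequential toy model:
    a task reads a data object that no earlier task has written.
    (The paper detects such errors on BPMN 2.0 models mapped to Petri Nets.)"""
    written = set()
    errors = []
    for task, reads, writes in process:
        for obj in reads:
            if obj not in written:
                errors.append((task, obj))
        written.update(writes)
    return errors

process = [
    ("Receive order", [], ["order"]),
    ("Check credit", ["order", "credit report"], ["approval"]),
    ("Ship goods", ["approval"], []),
]
errors = missing_data_errors(process)
# errors == [("Check credit", "credit report")]: 'credit report' is never written
```

Real BPMN models add gateways, alternative data inputs and optional data, which is exactly why the paper needs Petri-Net unfolding and anti-patterns instead of a linear scan like this one.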

BibTex:

    @Article{OJIS-2014v1i2n01_Stackelberg,
        title     = {Detecting Data-Flow Errors in BPMN 2.0},
        author    = {Silvia von Stackelberg and
                     Susanne Putze and
                     Jutta M{\"u}lle and
                     Klemens B{\"o}hm},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {1--19},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-2017052611934},
        urn       = {urn:nbn:de:101:1-2017052611934},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Data-flow errors in BPMN 2.0 process models, such as missing or unused data, lead to undesired process executions. In particular, since BPMN 2.0 with a standardized execution semantics allows specifying alternatives for data as well as optional data, identifying missing or unused data systematically is difficult. In this paper, we propose an approach for detecting data-flow errors in BPMN 2.0 process models. We formalize BPMN process models by mapping them to Petri Nets and unfolding the execution semantics regarding data. We define a set of anti-patterns representing data-flow errors of BPMN 2.0 process models. By employing the anti-patterns, our tool performs model checking for the unfolded Petri Nets. The evaluation shows that it detects all data-flow errors identified by hand, and so improves process quality.}
    }
1 citation in 2015:

Data perspective in business process management

Andreas Meyer

2015. Dissertation, Universität Potsdam

 Open Access 

Fuzzy Color Space for Apparel Coordination

Pakizar Shamoi, Atsushi Inoue, Hiroharu Kawanaka

Open Journal of Information Systems (OJIS), 1(2), Pages 20-28, 2014, Downloads: 9905, Citations: 7

Full-Text: pdf | URN: urn:nbn:de:101:1-201705194710 | GNL-LP: 1132360994 | Meta-Data: tex xml rdf rss

Abstract: Human perception of colors constitutes an important part in color theory. The applications of color science are truly omnipresent, and what impression colors make on humans plays a vital role in them. In this paper, we offer a novel approach for color information representation and processing using fuzzy sets and logic theory, which is extremely useful in modeling human impressions. Specifically, we use fuzzy mathematics to partition the gamut of feasible colors in HSI color space based on standard linguistic tags. The proposed method can be useful in various image processing applications involving query processing. We demonstrate its effectiveness in the implementation of a framework for the apparel online shopping coordination based on a color scheme. It deserves attention, since there is always some uncertainty inherent in the description of apparels.
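The fuzzy partition idea can be sketched with trapezoidal membership functions over the hue axis. The tags and breakpoints below are hypothetical and cover only hue; the paper's partition of the full HSI space (hue, saturation, intensity) is more elaborate.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside [a, d], 1 on the plateau [b, c]."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    return (x - a) / (b - a) if x < b else (d - x) / (d - c)

# Hypothetical linguistic tags on the hue circle (degrees); 'red' wraps around 0.
HUE_TAGS = {
    "red":    lambda h: max(trapezoid(h, -20, -10, 10, 20),
                            trapezoid(h, 340, 350, 370, 380)),
    "yellow": lambda h: trapezoid(h, 30, 50, 70, 90),
    "green":  lambda h: trapezoid(h, 90, 110, 140, 160),
    "blue":   lambda h: trapezoid(h, 200, 220, 250, 270),
}

def memberships(hue):
    """Degree to which a hue belongs to each linguistic color tag."""
    return {tag: f(hue % 360) for tag, f in HUE_TAGS.items()}

# memberships(0)["red"] == 1.0; memberships(15)["red"] == 0.5 (a borderline hue)
```

Borderline hues receive partial membership in neighboring tags, which is what lets a query like "reddish" rank apparel items by degree rather than by a crisp yes/no match.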

BibTex:

    @Article{OJIS_2014v1i2n02_Shamoi,
        title     = {Fuzzy Color Space for Apparel Coordination},
        author    = {Pakizar Shamoi and
                     Atsushi Inoue and
                     Hiroharu Kawanaka},
        journal   = {Open Journal of Information Systems (OJIS)},
        issn      = {2198-9281},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {20--28},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705194710},
        urn       = {urn:nbn:de:101:1-201705194710},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Human perception of colors constitutes an important part in color theory. The applications of color science are truly omnipresent, and what impression colors make on humans plays a vital role in them. In this paper, we offer a novel approach for color information representation and processing using fuzzy sets and logic theory, which is extremely useful in modeling human impressions. Specifically, we use fuzzy mathematics to partition the gamut of feasible colors in HSI color space based on standard linguistic tags. The proposed method can be useful in various image processing applications involving query processing. We demonstrate its effectiveness in the implementation of a framework for the apparel online shopping coordination based on a color scheme. It deserves attention, since there is always some uncertainty inherent in the description of apparels.}
    }
2 citations in 2015:

On Fuzzification of Color Spaces for Medical Decision Support in Video Capsule Endoscopy

V. B. Surya Prasath

In Proceedings of the 26th Modern AI and Cognitive Science Conference 2015, Greensboro, NC, USA, Pages 147-151, 2015.

Deep Color Semantics for E-commerce Content-based Image Retrieval

Pakizar Shamoi, Atsushi Inoue, Hiroharu Kawanaka

In Proceedings of the Workshop on Fuzzy Logic in AI, FLinAI 2015, co-located with the 24th International Joint Conference on Artificial Intelligence (IJCAI 2015), Buenos Aires, Argentina, 2015.

 Open Access 

Getting Indexed by Bibliographic Databases in the Area of Computer Science

Arne Kusserow, Sven Groppe

Open Journal of Web Technologies (OJWT), 1(2), Pages 10-27, 2014, Downloads: 14058, Citations: 2

Full-Text: pdf | URN: urn:nbn:de:101:1-201705291343 | GNL-LP: 1133021557 | Meta-Data: tex xml rdf rss

Abstract: Every author and publisher is interested in adding their publications to the widely used bibliographic databases freely accessible in the world wide web: This ensures the visibility of their publications and hence of the published research. However, the inclusion requirements of publications in the bibliographic databases are heterogeneous even on the technical side. This survey paper aims at shedding light on the various data formats, protocols and technical requirements of getting indexed by widely used bibliographic databases in the area of computer science and provides hints for maximal database inclusion. Furthermore, we point out the possibilities to utilize the data of bibliographic databases, and describe some personal and institutional research repository systems with special regard to the support of inclusion in bibliographic databases.

BibTex:

    @Article{OJWT_2014v1i2n02_Kusserow,
        title     = {Getting Indexed by Bibliographic Databases in the Area of Computer Science},
        author    = {Arne Kusserow and
                     Sven Groppe},
        journal   = {Open Journal of Web Technologies (OJWT)},
        issn      = {2199-188X},
        year      = {2014},
        volume    = {1},
        number    = {2},
        pages     = {10--27},
        url       = {http://nbn-resolving.de/urn:nbn:de:101:1-201705291343},
        urn       = {urn:nbn:de:101:1-201705291343},
        publisher = {RonPub},
        bibsource = {RonPub},
        abstract = {Every author and publisher is interested in adding their publications to the widely used bibliographic databases freely accessible in the world wide web: This ensures the visibility of their publications and hence of the published research. However, the inclusion requirements of publications in the bibliographic databases are heterogeneous even on the technical side. This survey paper aims at shedding light on the various data formats, protocols and technical requirements of getting indexed by widely used bibliographic databases in the area of computer science and provides hints for maximal database inclusion. Furthermore, we point out the possibilities to utilize the data of bibliographic databases, and describe some personal and institutional research repository systems with special regard to the support of inclusion in bibliographic databases.}
    }
0 citations in 2015