The development of the World Wide Web and the emergence of social media and Big Data have led to a rising amount of data. Information and Communication Technologies (ICTs) affect the environment in various ways; their energy consumption is growing exponentially, with and without the use of ‘green’ energy. Increasing environmental awareness has led to discussions on sustainable development. The data deluge makes it necessary to pay attention not only to the hardware and software dimensions of ICTs but also to the ‘value’ of the stored data. In this paper, we study the possibility of methodically reducing the amount of stored data and records in organizations based on the ‘value’ of information, using the Green Archiving Model we have developed. Reducing the amount of data and records helps organizations fight the data deluge and realize the objectives of both Digital Archiving and Green IT. At the same time, methodically deleting data and records should reduce the electricity consumed for data storage and, as a consequence, the organizational cost of electricity use. Our research showed that the model can be used to reduce [1] the amount of data (by 45 percent, using Archival Retention Levels and Retention Schedules) and [2] the electricity consumption for data storage (resulting in a cost reduction of 35 percent). These results indicate that the Green Archiving Model is a viable model for reducing the amount of stored data and records and for curbing electricity use for storage in organizations. This paper presents the first stage of a research project aimed at developing low-power ICTs that automatically appraise, select, and preserve or permanently delete data based on their ‘value’. Such ICTs would automatically reduce the required storage capacity and the electricity consumed for data storage. At the same time, data disposal would reduce the overload caused by storing the same data in different formats, lower costs, and reduce the potential for liability.
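To make the appraisal step concrete, the following minimal sketch shows how a retention schedule could drive automated preserve-or-delete decisions. It is an illustration under our own assumptions: the names (RetentionRule, Record, appraise) and the example rule are hypothetical and are not components of the Green Archiving Model itself.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class RetentionRule:
    """Hypothetical retention-schedule entry for one record category."""
    category: str                 # e.g. "invoice", "e-mail", "contract"
    retention_level: str          # "permanent" or "temporary"
    retention_period: timedelta   # how long temporary records are kept

@dataclass
class Record:
    identifier: str
    category: str
    created: date

def appraise(record: Record, rules: dict[str, RetentionRule], today: date) -> str:
    """Return 'preserve' or 'delete' based on the record's retention rule."""
    rule = rules.get(record.category)
    if rule is None or rule.retention_level == "permanent":
        return "preserve"  # unknown or permanent records are kept by default
    if today - record.created > rule.retention_period:
        return "delete"    # retention period expired: eligible for disposal
    return "preserve"

# Example: a ten-year-old invoice with a seven-year retention period is deleted.
rules = {"invoice": RetentionRule("invoice", "temporary", timedelta(days=7 * 365))}
rec = Record("inv-0001", "invoice", date(2014, 1, 1))
print(appraise(rec, rules, date(2024, 1, 1)))  # -> "delete"
```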
Automated Analysis of Human Performance Data could help to understand, and possibly predict, human performance. To inform future research and enable Automated Analysis of Human Performance Data, a systematic mapping study (scoping study) of the state-of-the-art knowledge was performed on three interconnected components: (i) Human Performance, (ii) Monitoring Human Performance, and (iii) Automated Data Analysis. Using the systematic method of Kitchenham and Charters for performing the mapping study, we conducted a comprehensive search for studies and categorised them using a qualitative method. This systematic mapping review extends the philosophy of Shyr and Spisic, and of Knuth, and represents the state-of-the-art knowledge on Human Performance, Monitoring Human Performance, and Automated Data Analysis.
The scientific publishing industry is rapidly transitioning towards information analytics. This shift disproportionately benefits large companies, which can afford to deploy digital technologies like knowledge graphs to index their contents and build advanced search engines. Small and medium publishing enterprises, by contrast, often lack the resources to fully embrace such digital transformations. This divide is acutely felt in the arts, humanities and social sciences, whose scholars are largely unable to benefit from modern scientific search engines because their publishing ecosystem consists of many specialized businesses which cannot, individually, develop comparable services. We propose to start bridging this gap by democratizing access to knowledge graphs – the technology underpinning modern scientific search engines – for small and medium publishers in the arts, humanities and social sciences. Their contents, largely made of books, already contain rich, structured information – such as references and indexes – which can be automatically mined and interlinked. We plan to develop a framework for extracting this structured information and creating knowledge graphs from it. As much as possible, we will consolidate existing, proven technologies into a single codebase instead of reinventing the wheel. Our consortium brings together researchers in scientific information mining, Odoma, an AI consulting company, and the publisher Brill, which shares its data and expertise. Brill will be able to immediately put the project results to use to improve its internal processes and services. Furthermore, our results will be published as open source with a commercial-friendly license, to foster the adoption and further development of the framework by other publishers. Ultimately, our proposal is an example of industry innovation where, instead of scaling up, we scale wide by creating a common resource which many small players can then use and expand upon.
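As a rough illustration of what mining references into a knowledge graph can look like, the sketch below turns plain-text references from a book into RDF triples using the rdflib library. The namespace, property names, and year-extraction regex are our own illustrative assumptions, not part of the project's actual framework.

```python
import re
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

# Hypothetical vocabulary; a real project would use an established ontology.
EX = Namespace("http://example.org/kg/")

def references_to_graph(book_uri: str, references: list[str]) -> Graph:
    """Turn plain-text references extracted from a book into RDF triples."""
    g = Graph()
    g.bind("ex", EX)
    book = URIRef(book_uri)
    g.add((book, RDF.type, EX.Book))
    for i, ref in enumerate(references):
        ref_node = URIRef(f"{book_uri}/ref/{i}")
        g.add((ref_node, RDF.type, EX.Reference))
        g.add((ref_node, EX.rawText, Literal(ref)))
        g.add((book, EX.cites, ref_node))
        # Very naive year extraction, purely for illustration.
        year = re.search(r"\b(1[89]|20)\d{2}\b", ref)
        if year:
            g.add((ref_node, EX.year, Literal(int(year.group()))))
    return g

refs = [
    "Doe, J. (1998). A Study of Indexes. Brill.",
    "Roe, R. (2005). Humanities Data. Brill.",
]
print(references_to_graph("http://example.org/book/1", refs).serialize(format="turtle"))
```

Interlinking would then amount to resolving such reference nodes against shared identifiers (e.g. DOIs) so that graphs from different publishers connect to one another.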
ILIAD builds on the assets resulting from two decades of investment in policies and infrastructures for the blue economy and aims at establishing an interoperable, data-intensive, and cost-effective Digital Twin of the Ocean (DTO). It capitalizes on the explosion of new data provided by many different Earth sources and on advanced computing infrastructures (cloud computing, HPC, the Internet of Things, Big Data, social networking, and more) in an inclusive, virtual/augmented, and engaging fashion to address all Earth Data challenges. It will contribute towards a sustainable ocean economy as defined by the Centre for the Fourth Industrial Revolution and the Ocean, a hub for global, multi-stakeholder co-operation.