2024
28. Moritz Bock (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Visual Analytics on Digital Product Passports to Optimize Production Chains. Master Thesis, TU Darmstadt, 2024, Master Sc. Thesis.
@mastersthesis{Bock2024,
title = {Visual Analytics on Digital Product Passports to Optimize Production Chains},
author = {Moritz Bock and Arjan Kuijper and Dirk Burkhardt},
year = {2024},
date = {2024-12-13},
urldate = {2024-12-13},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {To address the climate crisis, the European Union has introduced the European Green Deal, which includes the Digital Product Passport (DPP) to promote transparency of production data, especially sustainability data. This thesis explores the potential of DPP data to drive sustainability in production chains through Visual Analytics (VA). DPPs allow companies to analyze their sustainability across all stages of production, offering significant potential for reducing environmental impact.
Given its novelty, the DPP has yet to be established as a tool for product optimization. Related work focuses more on ensuring regulatory compliance than on leveraging its potential for production chain optimization. Hence, solutions are sparse and often involve tedious processes due to heterogeneous and limited data, a problem the DPP aims to solve. Accordingly, this thesis explores the potential of the DPP and VA to enhance sustainability in production chains.
To address the aforementioned issues, we developed a concept comprising a human-centered VA approach that enables users to analyze and simulate sustainability efforts over the whole production chain through interactive and intuitive visualizations. The Asset Administration Shell (AAS) was selected as the framework for representing DPPs due to its viability, as determined by an analysis of its contents.
The implementation encompasses a web-based VA system that integrates a standardized interface into an AAS-based backend. The user interface provides tools to navigate available products, identify high-emitting products, track emissions, and pinpoint specific life cycle stages or transport routes that contribute to greenhouse gas emissions. Users can test the impact of changes, such as replacing components, on emissions.
Use case analyses confirmed the system's effectiveness in supporting sustainable manufacturing despite some limitations due to inconsistencies in the AAS ecosystem. The system consistently provided valuable insights, emphasizing the viability of leveraging DPPs to support environmentally responsible, data-driven manufacturing decisions.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
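The central analysis step described in the abstract, aggregating greenhouse gas emissions over a production chain and simulating the effect of replacing a component, can be illustrated with a small sketch. The data structure, field names, and numbers below are illustrative assumptions and are not taken from the thesis or from the AAS data model.

from dataclasses import dataclass, field

@dataclass
class Component:
    # Hypothetical, heavily simplified stand-in for a DPP carbon-footprint record
    name: str
    own_emissions_kg_co2e: float            # emissions of this production step itself
    transport_emissions_kg_co2e: float = 0.0
    parts: list["Component"] = field(default_factory=list)

def total_emissions(c: Component) -> float:
    # Aggregate emissions over the component and all of its sub-components
    return (c.own_emissions_kg_co2e
            + c.transport_emissions_kg_co2e
            + sum(total_emissions(p) for p in c.parts))

def simulate_replacement(root: Component, old: str, new: Component) -> float:
    # Total footprint if every part named `old` were replaced by `new`
    def rebuild(c: Component) -> Component:
        if c.name == old:
            return new
        return Component(c.name, c.own_emissions_kg_co2e,
                         c.transport_emissions_kg_co2e, [rebuild(p) for p in c.parts])
    return total_emissions(rebuild(root))

# Toy production chain: a pump assembled from two sourced parts
motor = Component("motor", 120.0, 15.0)
casing = Component("steel casing", 80.0, 5.0)
pump = Component("pump", 40.0, 2.0, [motor, casing])
print(total_emissions(pump))                                                  # 262.0
print(simulate_replacement(pump, "steel casing", Component("recycled casing", 35.0, 5.0)))  # 217.0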
2022
27. Dean Lloyd Grove (Author); Michael Massoth (Supervisor); Frank Bühler (Co-Supervisor); Dirk Burkhardt (Advisor): Concepts for the Transformation of a Minimum Viable Product for the Automatic Identification of Users into a Scalable Infrastructure. Bachelor Thesis, Darmstadt University of Applied Sciences, 2022, Bachelor Sc. Thesis.
@mastersthesis{Grove2022,
title = {Concepts for the Transformation of a Minimum Viable Product for the Automatic Identification of Users into a Scalable Infrastructure},
author = {Dean Lloyd Grove and Michael Massoth and Frank Bühler and Dirk Burkhardt},
year = {2022},
date = {2022-06-21},
urldate = {2022-06-21},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {A Minimum Viable Product (MVP) is a product developed according to the lean startup method [1] and is used to develop a product quickly and get feedback from customers or investors without creating a lot of engineering overhead. By definition, the MVP uses the least amount of effort to receive the maximum amount of feedback [1]. While this in itself is commendable, developing something quickly leaves some best practices behind, which this thesis will focus on. By introducing options that are industry standards or best practices, it will be shown how the product can become more future-proof, be it by scaling the services out or up when more CPU performance is required, or by correctly managing user access and user roles, which can be beneficial in the long run. Regulatory concerns that arise when working with sensitive data will also be evaluated throughout this thesis, since compliance with the General Data Protection Regulation (GDPR) is extremely important.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
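One of the best practices the abstract refers to is the explicit management of user access and roles. A minimal, generic sketch of such a role check is shown below; the roles, permissions, and function names are hypothetical and not taken from the thesis.

from functools import wraps

ROLE_PERMISSIONS = {                 # hypothetical role-to-permission mapping
    "admin":   {"read", "write", "delete"},
    "analyst": {"read"},
}

def requires(permission: str):
    # Decorator that rejects calls from users whose role lacks the given permission
    def decorator(func):
        @wraps(func)
        def wrapper(user_role: str, *args, **kwargs):
            if permission not in ROLE_PERMISSIONS.get(user_role, set()):
                raise PermissionError(f"role '{user_role}' lacks '{permission}'")
            return func(user_role, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete")
def delete_identification_record(user_role: str, record_id: int) -> None:
    print(f"record {record_id} deleted")

delete_identification_record("admin", 42)      # allowed
# delete_identification_record("analyst", 42)  # raises PermissionError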
26. Shahrukh Badar (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Process Mining for Workflow-Driven Assistance in Visual Trend Analytics. Master Thesis, TU Darmstadt, 2022, Master Sc. Thesis.
@mastersthesis{Badar2022,
title = {Process Mining for Workflow-Driven Assistance in Visual Trend Analytics},
author = {Shahrukh Badar and Arjan Kuijper and Dirk Burkhardt},
year = {2022},
date = {2022-04-26},
urldate = {2022-04-26},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {In today's data-driven world, a large amount of data is generated daily. This data comes from different sources, such as social networking platforms, industrial machinery, daily transactions, etc. Companies and businesses are not only generating data but also utilizing it to improve their processes, business decisions, etc. There are several applications and tools that help users analyze this big data in depth by providing numerous ways to explore it, including different types of visualization, pivoting, filtering, grouping data, etc. The challenge with such applications is that they create long and steep learning curves for the users who need to work with them. Many systems are designed for a specific purpose, so knowing how a single system works is not enough. To ease the entry into working with such analytical systems, a kind of adaptive assistance would be helpful: based on the user's previous work and interactions, the system would hint at which next action might be useful. The thesis aims to face this challenge with process-driven assistance applied to the visual trend analytics domain. The goal is to assist further users in their work based on previous users' interactions and solved tasks. Therefore, a universal visual assistance model, based on a defined interaction event taxonomy, is defined and acts as the main contribution. This concept is applied to the Visual Trend Analytics domain on the SciTic reference system; “SciTic – Visual Trend Analytics” is connected with different data sources and provides analysis of scientific documents. The interaction model provides assistance in terms of recommendations, where the user has the option either to apply a recommendation or to ignore it. The solution provided in this thesis is model-based and utilizes the potential of Process Mining and Process Discovery techniques. It starts by creating an event taxonomy that identifies all possible ways of user interaction on the “SciTic – Visual Trend Analytics” web application. Next, the web application is enabled to log events chronologically based on the predefined taxonomy. These event logs are then converted into a Process Mining log format, and the Process Discovery algorithm “Heuristics Miner” is applied to the log data to generate a process model that shows the overall flow of user interactions along with their frequencies. Finally, this process model is used to provide users with recommendations.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
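The discovery step outlined above can be reproduced with the open-source pm4py library. The CSV file and column names below are hypothetical placeholders for the SciTic interaction log, and the dependency threshold value is arbitrary; none of this is taken from the thesis implementation.

import pandas as pd
import pm4py

# Hypothetical interaction log: one row per logged UI event
df = pd.read_csv("scitic_events.csv")          # columns: session_id, event, timestamp
df["timestamp"] = pd.to_datetime(df["timestamp"])
df = pm4py.format_dataframe(
    df,
    case_id="session_id",      # one analysis session corresponds to one process case
    activity_key="event",      # taxonomy event name, e.g. "apply_filter"
    timestamp_key="timestamp",
)

# Process Discovery with the Heuristics Miner mentioned in the abstract
heu_net = pm4py.discover_heuristics_net(df, dependency_threshold=0.5)
pm4py.view_heuristics_net(heu_net)             # renders the model with frequencies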
25. Sibgha Nazir (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Visual Analytics on Enterprise Reports for Investment and Strategical Analysis. Master Thesis, TU Darmstadt, 2022, Master Sc. Thesis.
@mastersthesis{Nazir2022,
title = {Visual Analytics on Enterprise Reports for Investment and Strategical Analysis},
author = {Sibgha Nazir and Arjan Kuijper and Dirk Burkhardt},
year = {2022},
date = {2022-03-28},
urldate = {2022-03-28},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Given the enormous amounts of data available today, suitable analysis techniques and graphical tools are required to derive knowledge and make this data useful. Scientists and developers have come up with visual analytics systems that combine machine learning technologies, such as text mining, with interactive data visualization to provide fresh insights into present and future trends. Data visualization has progressed to become a cutting-edge method for displaying and interacting with graphics on a single screen. Using visualizations, decision-makers may unearth insights and teams can spot trends and significant outliers in minutes [1]. A vast variety of automatic data analysis methods have been developed during the previous few decades. For investors, researchers, analysts, and decision-makers, these developments are significant in terms of innovation, technology management, and strategic decision-making.
The financial business is only one of many that will be influenced by the habits of the next generation, and it must be on the lookout for new ideas. Using cutting-edge financial analytics tools will, of course, have a significant commercial impact. Visual analytics, when added to these capabilities, can deliver relevant and helpful insights. By collecting internal financial information from different organizations, putting it in one place, and incorporating visual analytics tools, financial analytics software can address crucial business challenges with unprecedented speed, precision, and ease.
The goal of the thesis is to make use of visual analytics for the fundamental analysis of a business to support investors and business decision-makers. The idea is to collect financial reports, extract the data, and feed it to a visual analytics system. Financial reports are PDF documents published by public companies annually and quarterly, readily available on the companies' websites, containing the values of all financial indicators that fully and vividly paint the picture of a company's business. The financial indicators in those reports form the basis of fundamental analysis. The thesis focuses on such manually collected reports from company websites and conceptualizes and implements a pipeline that gathers text and facts from the reports, processes them, and feeds them to a visual analytics dashboard. Furthermore, the thesis uses state-of-the-art visualization tools and techniques to implement a visual analytics dashboard as a proof of concept and extends the visualization interface with interaction capabilities, giving analysts options to choose parameters of their choice and allowing them to filter and view the available data. The dashboard fully integrates with the data transformation pipeline to consume the data that has been collected, structured, and processed, and aims to display the financial indicators as well as allow the user to display them graphically. It also implements a user interface for manual data correction, ensuring continuous data cleansing.
The presented application makes use of state-of-the-art financial analytics and information visualization techniques to enable visual trend analysis. The application is a great tool for investors and business analysts for gaining insights into a business and analyzing historical trends of its earnings and expenses, as well as for several other use cases where financial reports of the business are a primary source of valuable information.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
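The first stage of the pipeline described above, pulling raw text out of a report PDF and extracting a single financial indicator, can be sketched as follows. The file name, indicator label, and regular expression are hypothetical examples and not part of the thesis implementation.

import re
from pypdf import PdfReader        # pip install pypdf

def extract_indicator(pdf_path: str, label: str) -> float | None:
    # Scan the report for a line like "Total revenue 1,234.5" and return the number
    text = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    match = re.search(rf"{re.escape(label)}\s+([\d.,]+)", text, flags=re.IGNORECASE)
    if not match:
        return None
    return float(match.group(1).replace(",", ""))

# Hypothetical usage with an annual report downloaded from a company website
print(extract_indicator("annual_report_2021.pdf", "Total revenue"))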
2021
24. Lennart Bijan Sina (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Visual Analytics for Unstructured Data and Scalable Data. Master Thesis, Darmstadt University of Applied Sciences, 2021, Master Sc. Thesis.
@mastersthesis{Sina2021,
title = {Visual Analytics for Unstructured Data and Scalable Data},
author = {Lennart Bijan Sina and Kawa Nazemi and Dirk Burkhardt},
year = {2021},
date = {2021-11-04},
urldate = {2021-11-04},
address = {Dieburg, Germany},
school = {Darmstadt University of Applied Sciences},
abstract = {Visual Analytics, as the science of analytical reasoning, combines the strengths of humans and machines. Visual Analytics uses machine learning, artificial intelligence, and user-centered interactive visualization, so that even complex analysis tasks can be solved relatively easily. Visual Analytics usually involves using a vast amount of data for analysis. However, the entire data set is not always required for analysis tasks, so a progressive approach is needed that scales the data and progressively visualizes the corresponding amount of data for analysis. In this master's thesis, a visual analytics system was conceptualized and implemented that scales data through middleware to enable more efficient analysis. For this purpose, diverse approaches and systems were investigated, which led to a coherent concept. The concept was implemented and connected to an existing database, enabling real-world use of the system under real-world conditions. The scientific contribution of the present work is three-fold: (1) the concept of a visual analytics system to scale data, (2) a novel data model, and (3) a novel and fully implemented visual dashboard that also enables reporting.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
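The idea of middleware that scales data by delivering it progressively, instead of returning the complete data set at once, can be sketched with a small batching helper; the record layout and batch size below are arbitrary assumptions, not the data model developed in the thesis.

from typing import Iterable, Iterator, List, TypeVar

T = TypeVar("T")

def progressive_batches(records: Iterable[T], batch_size: int = 500) -> Iterator[List[T]]:
    # Yield the data set in portions so the dashboard can render early results
    # while the rest is still being prepared
    batch: List[T] = []
    for record in records:
        batch.append(record)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

# Hypothetical usage: the visual dashboard requests one batch at a time
rows = ({"id": i, "title": f"document {i}"} for i in range(1200))
for i, chunk in enumerate(progressive_batches(rows, batch_size=500)):
    print(f"batch {i}: {len(chunk)} rows sent to the visualization")   # 500, 500, 200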
23. Walter Oster (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Interaktive Visualisierung von kausalen Zusammenhängen im Business-Process-Monitoring. Bachelor Thesis, Darmstadt University of Applied Sciences, 2021, Bachelor Sc. Thesis.
@mastersthesis{Oster2021,
title = {Interaktive Visualisierung von kausalen Zusammenhängen im Business-Process-Monitoring},
author = {Walter Oster and Kawa Nazemi and Dirk Burkhardt},
year = {2021},
date = {2021-03-18},
urldate = {2021-03-18},
address = {Dieburg},
school = {Darmstadt University of Applied Sciences},
abstract = {The aim of this bachelor thesis is to enable users to use the website for error analysis and to recognize causal relationships between one or more errors, so that a manual inspection of the individual log files is no longer strictly necessary. At the same time, this should reduce the time and effort users spend on the analysis. It is also important that this application supports the monitored process chains and thus provides real benefit in the future. A further goal of the developed solution is to keep error analysis and error resolution simple, in order to untangle complex structures, improve the course of a process chain, and thus ease the entry for new or inexperienced employees. In addition, the respective users, depending on their role or field of activity, should be able to assemble an individual dashboard with the applications that are relevant to them.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
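The underlying analysis task, correlating errors that occur close together in time across several log files to surface candidate causal chains, can be sketched as follows. The log entries and the 30-second window are hypothetical assumptions, not the format or threshold used in the thesis.

from datetime import datetime, timedelta

# Hypothetical parsed log entries: (timestamp, source file, error message)
entries = [
    (datetime(2021, 3, 1, 10, 0, 2), "billing.log",  "ERROR invoice export failed"),
    (datetime(2021, 3, 1, 10, 0, 5), "transfer.log", "ERROR upstream service timeout"),
    (datetime(2021, 3, 1, 14, 30, 0), "billing.log", "ERROR template missing"),
]

def group_by_window(entries, window=timedelta(seconds=30)):
    # Group errors whose timestamps lie within `window` of the previous one;
    # each group is a candidate causal chain for the analyst to inspect
    entries = sorted(entries, key=lambda e: e[0])
    groups, current = [], []
    for entry in entries:
        if current and entry[0] - current[-1][0] > window:
            groups.append(current)
            current = []
        current.append(entry)
    if current:
        groups.append(current)
    return groups

for chain in group_by_window(entries):
    print([f"{ts:%H:%M:%S} {src}: {msg}" for ts, src, msg in chain])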
22. Irtaza Rasheed (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Name Disambiguation on Digital Library Data for an Enhanced Profile Analysis in Visual Trend Analytics. Master Thesis, TU Darmstadt, 2021, Master Sc. Thesis.
@mastersthesis{Rasheed2021,
title = {Name Disambiguation on Digital Library Data for an Enhanced Profile Analysis in Visual Trend Analytics},
author = {Irtaza Rasheed and Arjan Kuijper and Dirk Burkhardt},
year = {2021},
date = {2021-03-12},
urldate = {2021-03-12},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Name ambiguity is a challenging and critical problem in many applications, such as scientific literature management, trend analysis, etc. The main reasons for this are different name abbreviations, identical names, and name misspellings in publications and bibliographies. An author may have multiple names, and multiple authors may have the same name. So, when we look for a particular name, many documents containing that person's name may be returned or missed because of the author's different styles of writing their name. This produces name ambiguity, which affects the performance of document retrieval, web search, and database integration, and may result in improper classification of authors. Many clustering-based algorithms have been proposed, but the problem still remains largely unsolved for both the research and industry communities, specifically with the fast growth of the available information.
The aim of this thesis is the implementation of a universal name disambiguation approach that considers almost any existing property to identify authors. After an author of a paper is identified, the normalized name writing form on the paper is used to refine the author model and even give an overview of the different writing forms of the author's name. This is achieved by first examining the research on Human-Computer Interaction, specifically with a focus on (Visual) Trend Analysis, followed by research on different name disambiguation techniques. Based on this, a concept is built and a generalized method is implemented that disambiguates author names and affiliations while evaluating different properties.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
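A very small sketch of the basic idea, normalizing name writing forms and grouping variants that likely refer to the same author, is shown below. The normalization rule and the similarity threshold are deliberate simplifications of the property-based approach described in the thesis.

from difflib import SequenceMatcher

def normalize(name: str) -> str:
    # Lower-case, put the family name last, and reduce given names to initials
    if "," in name:                               # "Burkhardt, Dirk" -> "Dirk Burkhardt"
        last, first = name.split(",", 1)
        name = f"{first} {last}"
    parts = name.replace(".", " ").lower().split()
    return " ".join([p[0] for p in parts[:-1]] + [parts[-1]])

def same_author(a: str, b: str, threshold: float = 0.85) -> bool:
    # Treat two writing forms as the same author if their normalized forms are similar
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold

variants = ["Dirk Burkhardt", "D. Burkhardt", "Burkhardt, Dirk", "Arjan Kuijper"]
groups: list[list[str]] = []
for v in variants:
    for g in groups:
        if same_author(v, g[0]):
            g.append(v)
            break
    else:
        groups.append([v])
print(groups)   # [['Dirk Burkhardt', 'D. Burkhardt', 'Burkhardt, Dirk'], ['Arjan Kuijper']]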
2020
21. Midhad Blazevic (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Visual Search & Exploration for Scientific Publications through Similarity. Master Thesis, Darmstadt University of Applied Sciences, 2020, Master Sc. Thesis.
@mastersthesis{Blazevic2020,
title = {Visual Search & Exploration for Scientific Publications through Similarity},
author = {Midhad Blazevic and Kawa Nazemi and Dirk Burkhardt},
year = {2020},
date = {2020-09-07},
urldate = {2020-09-07},
address = {Dieburg},
school = {Darmstadt University of Applied Sciences},
abstract = {This thesis analyzes exploratory search systems that use sophisticated features, visualizations, and similarity-based algorithms to enhance exploratory searches. It examines how similarity algorithms are currently used in combination with elements from information retrieval, natural language processing, and visualization, but also examines what exploratory search is, what its requirements are, and what makes it so special in modern times. Furthermore, the users themselves are analyzed, as user behavior during exploratory searches is a key factor that has to be taken into consideration when looking to optimize the exploratory search process overall. Based on these aspects, means of improvement are developed and showcased, which are used to determine whether there is an improvement in comparison to other well-known systems. The outcome of this thesis is a prototype of an exploratory search system along with a practical use case.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
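The similarity backbone of such an exploratory search system can be sketched in a few lines with scikit-learn; the mini-corpus below is made up and stands in for publication abstracts.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical mini-corpus of publication abstracts
abstracts = [
    "visual analytics for trend analysis on digital libraries",
    "interactive visualization of scientific publication trends",
    "deep learning for protein structure prediction",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
similarity = cosine_similarity(vectors)        # pairwise document similarity matrix

# Documents most similar to the first abstract, best match first
ranking = similarity[0].argsort()[::-1]
print([(int(i), round(float(similarity[0][i]), 2)) for i in ranking if i != 0])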
20. Ubaid Rana (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Named-Entity Recognition on Publications and Raw-Text for Meticulous Insight at Visual Trend Analytics. Master Thesis, TU Darmstadt, 2020, Master Sc. Thesis.
@mastersthesis{Rana2019,
title = {Named-Entity Recognition on Publications and Raw-Text for Meticulous Insight at Visual Trend Analytics},
author = {Ubaid Rana and Arjan Kuijper and Dirk Burkhardt},
year = {2020},
date = {2020-03-17},
urldate = {2020-03-17},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {In the modern data-driven era, a massive number of research documents are available from publicly accessible digital libraries in the form of academic papers, journals, and publications. This plethora of data does not by itself lead to new insights or knowledge. Therefore, suitable analysis techniques and graphical tools are needed to derive knowledge and gain insight into this big data. To address this issue, researchers have developed visual analytics systems together with machine learning methods, e.g., text mining with interactive data visualization, which lead to new insights into current and upcoming technology trends. These trends are significant for researchers, business analysts, and decision-makers for innovation, technology management, and strategic decisions.
Nearly every existing search portal uses only traditional meta-information, e.g., about the author and title, to find the documents that match a search request, and overlooks the opportunity of extracting content-related information. This limits the possibility of discovering the most relevant publications; moreover, it lacks the knowledge required for trend analysis. To collect this very concrete information, named entity recognition must be used to better identify results and trends. State-of-the-art systems use a static approach to named entity recognition, which means that upcoming technologies remain undetected. Modern techniques like distant supervision methods leverage big existing community-maintained data sources, such as Wikipedia, to extract entities dynamically. Nonetheless, these methods are still unstable and have never been tried on complex scenarios such as trend analysis before.
The aim of this thesis is to enable entity recognition on both static tables and dynamic community-updated data sources like Wikipedia & DBpedia for trend analysis. To accomplish this goal, a model is suggested that enables entity extraction on DBpedia and translates the extracted entities into interactive visualizations. Analysts can use these visualizations to gain trend insights, evaluate research trends, or analyze prevailing market moods and industry trends.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
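The dynamic entity extraction on DBpedia described in the abstract can be approximated with the public DBpedia Spotlight web service; the endpoint, parameters, and response fields below follow that service's public documentation and are independent of the thesis implementation.

import requests

def dbpedia_entities(text: str, confidence: float = 0.5) -> list[tuple[str, str]]:
    # Annotate free text with DBpedia entities via the DBpedia Spotlight service
    response = requests.get(
        "https://api.dbpedia-spotlight.org/en/annotate",
        params={"text": text, "confidence": confidence},
        headers={"Accept": "application/json"},
        timeout=10,
    )
    response.raise_for_status()
    resources = response.json().get("Resources", [])
    return [(r["@surfaceForm"], r["@URI"]) for r in resources]

# Hypothetical abstract snippet
print(dbpedia_entities("Convolutional neural networks are widely used in computer vision."))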
2019
19. Daniela Keller (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Interactive Graphical Database Querying – Understanding SQL-Statements through Graphical Presentations. Master Thesis, Darmstadt University of Applied Sciences, 2019, Master Sc. Thesis.
@mastersthesis{Keller2019,
title = {Interactive Graphical Database Querying – Understanding SQL-Statements through Graphical Presentations},
author = {Daniela Keller and Kawa Nazemi and Dirk Burkhardt},
year = {2019},
date = {2019-09-08},
urldate = {2019-09-08},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {The necessity to collect, store, retrieve, and analyze data is growing in our society. More and more companies base their decisions on data. Therefore, the volume of data collected and analyzed is growing. Nowadays, a lot of job descriptions require applicants to have knowledge of the “Structured Query Language” (SQL). Nevertheless, the task of querying databases occurs only sporadically. Often, users of databases do learn SQL and are aware of the concepts and semantics of how to query databases, but the syntax needed to query databases, and especially to write complex queries, causes problems.
To support new and occasional users in querying databases, a web-based graphical user interface for SQL queries and statements was conceptualized and implemented. The artifact was developed according to the principles of design science research. The requirements were derived from literature research, and a review of commercial tools currently available on the market was conducted. Based on the requirements, the artifact was conceptualized and implemented. The focus was set on querying relational databases with a browser application. The artifact was evaluated qualitatively with test persons from different fields. The evaluation shows that the artifact supports, facilitates, and speeds up the querying process for users who use SQL occasionally. The effect of learning SQL by executing queries with the developed artifact was confirmed when considering semantic concepts.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
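The core translation step of such a tool, turning graphical selections (table, columns, filters) into a parameterized SQL statement, can be sketched as follows; the schema and helper function are hypothetical and not part of the thesis artifact.

import sqlite3

def build_query(table: str, columns: list[str], filters: dict) -> tuple[str, list]:
    # Assemble a parameterized SELECT from GUI selections
    # (table and column identifiers are assumed to be validated against the schema)
    where = " AND ".join(f"{col} = ?" for col in filters) if filters else "1 = 1"
    sql = f"SELECT {', '.join(columns)} FROM {table} WHERE {where}"
    return sql, list(filters.values())

# Hypothetical in-memory demo database
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER, customer TEXT, total REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)",
                [(1, "ACME", 120.0), (2, "Globex", 99.5), (3, "ACME", 10.0)])

sql, params = build_query("orders", ["id", "total"], {"customer": "ACME"})
print(sql)                                  # SELECT id, total FROM orders WHERE customer = ?
print(con.execute(sql, params).fetchall())  # [(1, 120.0), (3, 10.0)]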
18. Viet Anh Ly (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Interactive visualization for analytical comparisons. Bachelor Thesis, Darmstadt University of Applied Sciences, 2019, Bachelor Sc. Thesis.
@mastersthesis{Ly2019,
title = {Interactive visualization for analytical comparisons},
author = {Viet Anh Ly and Kawa Nazemi and Dirk Burkhardt},
year = {2019},
date = {2019-08-08},
urldate = {2019-08-08},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {Interactive visualization is a branch of computer science and programming that focuses on graphical visualizations and on enhancing the way we can access and interact with information. Interactive visualization provides a fast and easy way to understand insights based on the data. Visualizations that are considered interactive have to involve an aspect of human input, for example clicking on a select box to filter a value or moving the mouse over a graph to see more detail of a value. Additionally, interactive visualizations should respond quickly enough to show a real relation between input and output.
This thesis introduces an interactive visualization for analytical comparisons. The implementation of the visualization not only considers the comparisons but also improves the interactivity so that users can analyze the data in depth. The visualization is realized as a web application so that it can reach several audiences and enable them to interact with the charts and change the chart type for different datasets. A JavaScript library for building user interfaces and a JavaScript library for visualizing data are combined for this work: ReactJS and d3.js manage a comprehensive program with reusable components to achieve an interactive web-based visualization for comparisons.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
17. Hong Giang Hoang (Author); Kawa Nazemi (Supervisor); Dirk Burkhardt (Co-Supervisor): Interactive Comparative Visualization for Digital Libraries. Bachelor Thesis, Darmstadt University of Applied Sciences, 2019, Bachelor Sc. Thesis.
@mastersthesis{Hoang2019,
title = {Interactive Comparative Visualization for Digital Libraries},
author = {Hong Giang Hoang and Kawa Nazemi and Dirk Burkhardt},
year = {2019},
date = {2019-08-08},
urldate = {2019-08-08},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {Nowadays, the development of digital libraries promotes the growth of visualization technologies to support users in dealing with increasing amounts of large and complex data. Comparison is familiar and applied in many different domains for various purposes. In work and research, comparisons support people in making decisions, improving the current status of a particular product, or evaluating results. In data analytics, the comparative visualization of two or more objects plays an increasingly important role. Comparative visualization techniques support users in determining relations, differences, or similarities of different data sets more easily, especially when handling large amounts of complex data.
This thesis proposes a comparative visualization that shows how two or more queries evolve differently and provides a novel interactive visualization to encode the difference in the number of publications of the associated queries. The application is built on the API of Vissights, a database of digital libraries. As the publication database includes a lot of additional data attributes, a selection of attributes is used for the visualization to give further insight. The process from raw and structured data to the final view transformation of this comparison tool is presented. The result is an interactive visual comparison solution rendered in different graphs, whereby the user can change the visual structures at any time. The graphs are represented on a timeline, which allows a user to explore the past and current number of documents of the associated queries as well as to find trends in the publications.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
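The basic comparison encoding, publication counts per year for two queries on a shared timeline plus their difference, can be sketched with matplotlib; the counts are invented and do not come from the Vissights API.

import matplotlib.pyplot as plt

# Hypothetical publication counts per year for two search queries
years = list(range(2010, 2019))
query_a = [12, 15, 21, 30, 44, 51, 63, 80, 95]
query_b = [40, 42, 41, 45, 43, 46, 44, 47, 45]

fig, (top, bottom) = plt.subplots(2, 1, sharex=True, figsize=(7, 5))
top.plot(years, query_a, marker="o", label="query A")
top.plot(years, query_b, marker="s", label="query B")
top.set_ylabel("publications")
top.legend()

# Encode the difference between the two queries explicitly, as the thesis proposes
bottom.bar(years, [a - b for a, b in zip(query_a, query_b)])
bottom.set_ylabel("A minus B")
bottom.set_xlabel("year")
plt.tight_layout()
plt.show()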
16. Rehman Ahmed Abdul (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Contrasted Data from Science and Web for Advanced Visual Trend Analytics. Master Thesis, TU Darmstadt, 2019, Master Sc. Thesis.
@mastersthesis{Abdul2019,
title = {Contrasted Data from Science and Web for Advanced Visual Trend Analytics},
author = {Rehman Ahmed Abdul and Arjan Kuijper and Dirk Burkhardt},
year = {2019},
date = {2019-07-10},
urldate = {2019-07-10},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {With more digital libraries becoming publicly accessible, a plethora of digital research data is now available for gaining insights into actual and upcoming technology trends. These trends are essential to researchers, business analysts, and decision-makers for making strategic decisions and setting strategic goals. Appropriate processing and graphical analysis methods are required in order to extract meaningful information from the data. In particular, the combination of data mining approaches with visual analytics leads to truly beneficial applications to support decision making, e.g., in innovation or technology management.
The data from digital libraries is limited to research and overlooks market aspects: e.g., if a trend is not important for key business players, it is irrelevant for the market. This importance of market aspects creates a demand for validation approaches based on market data. Most of the current market data can be found publicly on websites and social networks, e.g., as news from enterprises, on tech review sites, or on tech blogs. Therefore, it makes sense to consider this public and social media data as contrasting data that can be used to validate technology trends derived from research digital library data.
The goal of this thesis is to enable trend analysis on public and social web data and compare it with trends retrieved from research library data to enable the validation of trends. To achieve this goal, a model is proposed that acquires public/social web and digital library data based on a user-defined scope called a "campaign", which is then visually transformed from raw data into interactive visualizations, passing through different stages of data management, enrichment, transformation, and visual mapping. These interactive visualizations can either be used in insight analysis, to gain trend insights for an individual data source, or in comparative analysis, with the goal of validating trends from two contrasting data sources.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
15. Muhammad Ali Riaz (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor): Visual Trend Analysis on Condensed Expert Data beside Research Library Data for Enhanced Insights. Master Thesis, TU Darmstadt, 2019, Master Sc. Thesis.
@mastersthesis{Riaz2019,
title = {Visual Trend Analysis on Condensed Expert Data beside Research Library Data for Enhanced Insights},
author = {Muhammad Ali Riaz and Arjan Kuijper and Dirk Burkhardt},
year = {2019},
date = {2019-04-08},
urldate = {2019-04-08},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {In the present age of information, we live amidst seas of digital text documents, including academic publications, white papers, news articles, patents, and newspapers. To tackle the issue of the ever-increasing number of text documents, researchers from the fields of text mining and information visualization have developed tools and techniques to facilitate text analysis. In the context of visual trend analysis on text data, the use of well-structured patent data and public digital libraries is quite established. However, both sources of information have their limitations. For instance, the registration process for patents takes at least one year, which makes the extracted insights unsuitable for research on present scenarios. In contrast to patent data, digital libraries are up-to-date but provide only high-level insights, limited to broader research domains, and the data usage is almost restricted to meta-information, such as title, author names, and abstract; they do not provide full text.
For certain types of detailed analysis, such as competitor analysis or portfolio analysis, data from digital libraries is not enough; it would also make sense to analyze the full text. It can even be beneficial to analyze only a limited data set that is filtered by an expert towards a very specific field, such as additive printing or smart wearables for medical observations. Sometimes a mixture of both digital library data and manually collected documents is relevant to validate a certain trend, where one gives the big picture and the other gives a very condensed overview of the present scenario.
The thesis therefore focuses on such documents collected manually by experts, which can be defined as condensed data. The major goal of this thesis is to conceptualize and implement a solution that enables the creation and analysis of such a condensed data set and thereby compensates for the limitations of digital library data analysis. As a result, a visual trend analysis system for analyzing text documents is presented that utilizes the best of both state-of-the-art text analytics and information visualization techniques. In a nutshell, the presented trend analysis system does two things. Firstly, it is capable of extracting raw data from text documents in the form of unstructured text and meta-data, converting it into structured and analyzable formats, extracting trends from it, and presenting them with appropriate visualizations. Secondly, the system is capable of performing gap-analysis tasks between two data sources, which in this case are digital library data and data from manually collected text documents (condensed expert data). The proposed visual trend analysis system can be used by researchers for analyzing research trends, by organizations to identify the current market buzz and industry trends, and in many other use cases where text data is the primary source of valuable information.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
In the present age of information, we live amidst seas of digital text documents, including academic publications, white papers, news articles, patents, and newspapers. To cope with this ever-increasing volume of text documents, researchers from the fields of text mining and information visualization have developed tools and techniques that facilitate text analysis. In the context of visual trend analysis on text data, the use of well-structured patent data and public digital libraries is quite established. However, both sources of information have their limitations. For instance, the registration process for patents takes at least one year, which makes the extracted insights unsuitable for studying present scenarios. In contrast to patent data, digital libraries are up to date but provide only high-level insights limited to broader research domains; their data is largely restricted to metadata such as title, author names, and abstract, and they do not provide full text.
For certain types of detailed analysis, such as competitor analysis or portfolio analysis, data from digital libraries is not enough; it also makes sense to analyze the full text. Moreover, it can be beneficial to analyze only a limited dataset that has been filtered by an expert towards a very specific field, such as additive printing or smart wearables for medical observation. Sometimes a mixture of digital library data and manually collected documents is relevant for validating a certain trend, where one gives the big picture and the other a very condensed overview of the present situation.
The thesis therefore focuses on such manually collected expert documents, which can be defined as condensed data. The major goal of this thesis is to conceptualize and implement a solution that enables the creation and analysis of such a condensed dataset and thereby compensates for the limitations of digital library data analysis. As a result, a visual trend analysis system for analyzing text documents is presented, which utilizes the best of both state-of-the-art text analytics and information visualization techniques. In a nutshell, the presented trend analysis system does two things. First, it extracts raw data from text documents in the form of unstructured text and metadata, converts it into structured and analyzable formats, extracts trends from it, and presents them with appropriate visualizations. Second, the system can perform gap-analysis tasks between two data sources, in this case digital library data and data from manually collected text documents (condensed expert data). The proposed visual trend analysis system can be used by researchers for analyzing research trends, by organizations to identify current market buzz and industry trends, and in many other use cases where text data is the primary source of valuable information. |
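To make the gap analysis between the two corpora more tangible, the following is a small, hypothetical Python sketch. The record shape ({"year": ..., "text": ...}) and the document-share trend signal are assumptions made for illustration; the thesis system's actual data model and trend measures may differ.

```python
from collections import Counter

def yearly_term_share(docs, term):
    """Share of documents per year mentioning `term` (a simple, assumed trend signal)."""
    mentions, totals = Counter(), Counter()
    for doc in docs:  # doc: {"year": int, "text": str}
        totals[doc["year"]] += 1
        if term.lower() in doc["text"].lower():
            mentions[doc["year"]] += 1
    return {year: mentions[year] / totals[year] for year in totals}

def gap_analysis(library_docs, expert_docs, term):
    """Year-by-year difference between the broad library signal and the condensed expert signal."""
    lib = yearly_term_share(library_docs, term)
    exp = yearly_term_share(expert_docs, term)
    years = sorted(set(lib) | set(exp))
    return {year: exp.get(year, 0.0) - lib.get(year, 0.0) for year in years}
```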
2018
|
14. | | Ranveer Purey (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Visual Trend Analysis on Digital Semantic Library Data for Innovation Management Master Thesis TU Darmstadt, 2018, Master Sc. Thesis. @mastersthesis{Purey2018,
title = {Visual Trend Analysis on Digital Semantic Library Data for Innovation Management},
author = {Ranveer Purey and Arjan Kuijper and Dirk Burkhardt},
year = {2018},
date = {2018-11-12},
urldate = {2018-11-12},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {The amount of scientific data published online has grown massively in recent years. This has led to exponential growth in the amount of data stored in digital libraries (DLs) such as Springer, Eurographics, and the Digital Bibliography & Library Project (dblp). One of the major challenges is to prevent users from getting lost in irrelevant search results when they try to retrieve information in order to gain meaningful insights from these digital libraries; this problem is known as information overload. Another challenge is the quality of the data in digital libraries, which can suffer from missing information, the absence of links to external databases, poorly structured data, and data that is not semantically annotated. Beyond data quality, a further challenge is that although tools exist to help users retrieve and visualize information from large datasets, these tools often lack one or more basic requirements such as data mining, visualization, or interaction techniques. These issues have led to increased research in the field of visual analytics, which combines the disciplines of data processing, information visualization, and human-computer interaction. The main goal of this thesis is to overcome the information overload problem and the challenges mentioned above. This is achieved by using Springer's digital library SciGraph, which serves as a very rich source of semantically annotated data. The data from SciGraph is used in combination with data integration, data mining, and information visualization techniques in order to aid users in the decision-making process and to perform visual trend analysis on digital semantic library data. This concept is designed and developed as part of an innovation management process, which helps transform innovative ideas into reality using a structured process.
In this thesis, a conceptual model for performing visual trend analysis on digital semantic library data as part of an innovation management process has been proposed and implemented. To create the conceptual model, several disciplines such as human-computer interaction, trend detection methods, user-centered design, user experience, and innovation management have been researched. In addition, various information visualization tools for digital libraries have been evaluated in order to identify and address the challenges these tools face. The conceptual model proposed in this thesis combines the use of semantic data with the information visualization process and also follows a structured innovation management process, in order to ensure that the concept and its implementation (proof of concept) are valid, usable, and valuable to the user.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
The amount of scientific data published online has grown massively in recent years. This has led to exponential growth in the amount of data stored in digital libraries (DLs) such as Springer, Eurographics, and the Digital Bibliography & Library Project (dblp). One of the major challenges is to prevent users from getting lost in irrelevant search results when they try to retrieve information in order to gain meaningful insights from these digital libraries; this problem is known as information overload. Another challenge is the quality of the data in digital libraries, which can suffer from missing information, the absence of links to external databases, poorly structured data, and data that is not semantically annotated. Beyond data quality, a further challenge is that although tools exist to help users retrieve and visualize information from large datasets, these tools often lack one or more basic requirements such as data mining, visualization, or interaction techniques. These issues have led to increased research in the field of visual analytics, which combines the disciplines of data processing, information visualization, and human-computer interaction. The main goal of this thesis is to overcome the information overload problem and the challenges mentioned above. This is achieved by using Springer's digital library SciGraph, which serves as a very rich source of semantically annotated data. The data from SciGraph is used in combination with data integration, data mining, and information visualization techniques in order to aid users in the decision-making process and to perform visual trend analysis on digital semantic library data. This concept is designed and developed as part of an innovation management process, which helps transform innovative ideas into reality using a structured process.
In this thesis, a conceptual model for performing visual trend analysis on digital semantic library data as part of an innovation management process has been proposed and implemented. To create the conceptual model, several disciplines such as human-computer interaction, trend detection methods, user-centered design, user experience, and innovation management have been researched. In addition, various information visualization tools for digital libraries have been evaluated in order to identify and address the challenges these tools face. The conceptual model proposed in this thesis combines the use of semantic data with the information visualization process and also follows a structured innovation management process, in order to ensure that the concept and its implementation (proof of concept) are valid, usable, and valuable to the user. |
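The semantic annotation in SciGraph is what allows trend queries to be expressed declaratively. Purely as an illustration, a per-year publication count for a topic could be retrieved roughly as follows; the endpoint URL, class, and predicate IRIs below are placeholders and not SciGraph's real schema, so treat this as a hedged sketch of the querying pattern only.

```python
from SPARQLWrapper import SPARQLWrapper, JSON

# Placeholder endpoint and vocabulary; SciGraph's real endpoint and schema differ.
ENDPOINT = "https://example.org/scigraph/sparql"

def publications_per_year(topic):
    """Count semantically annotated publications per year for a topic keyword."""
    sparql = SPARQLWrapper(ENDPOINT)
    sparql.setReturnFormat(JSON)
    sparql.setQuery(f"""
        SELECT ?year (COUNT(?pub) AS ?n) WHERE {{
            ?pub a <http://example.org/Publication> ;
                 <http://example.org/hasYear> ?year ;
                 <http://example.org/hasKeyword> "{topic}" .
        }}
        GROUP BY ?year ORDER BY ?year
    """)
    rows = sparql.query().convert()["results"]["bindings"]
    return {int(r["year"]["value"]): int(r["n"]["value"]) for r in rows}
```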
13. | | Namitha Chandrashekara (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) User-Centered Scientific Publication Research and Exploration in Digital Libraries Master Thesis TU Darmstadt, 2018, Master Sc. Thesis. @mastersthesis{Chandrashekara2018,
title = {User-Centered Scientific Publication Research and Exploration in Digital Libraries},
author = {Namitha Chandrashekara and Arjan Kuijper and Dirk Burkhardt},
year = {2018},
date = {2018-10-02},
urldate = {2018-10-02},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Scientific research is the basis for innovation. Surveying research papers is an essential step in the research process and is vital for elaborating the state of the art. Due to the rapid growth of scientific and technical discoveries, more and more publications become available. Traditional publishing channels, such as physical libraries and books, become hard to keep track of as the number of publications rises. For this reason, online archives for scientific publications have become more prominent in the scientific community. Search engines and digital libraries help researchers identify scientific publications; however, they provide limited search capabilities and visual interfaces. Most search engines offer a single search field and only basic filtering of the data. Therefore, even with popular search engines, it is hard for users to survey research papers, as they are limited to searching with simple keywords. Relationships across multiple fields of a publication are also not considered, for instance to find related papers or papers connected through citations and references.
The main aim of the thesis is to develop visual access to digital libraries based on scientific research and exploration, supporting users in writing scientific papers. A scientific research and exploration model is developed based on a previous information visualization model for visual trend analysis with digital libraries, and with consideration of the research process. The principles of the Visual Information-Seeking Mantra are incorporated to provide an interactive user interface that enhances the user experience.
Within the scope of this work, research on human-computer interaction is carried out, particularly regarding aspects of user interface design. An overview of scientific research, its types, and various aspects of data analysis is given. Different research models as well as existing approaches and tools that help researchers with literature surveys are also examined. Finally, the architecture and implementation details of the scientific research and exploration system that provides visual access to digital libraries are presented.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Scientific research is the basis for innovation. Surveying research papers is an essential step in the research process and is vital for elaborating the state of the art. Due to the rapid growth of scientific and technical discoveries, more and more publications become available. Traditional publishing channels, such as physical libraries and books, become hard to keep track of as the number of publications rises. For this reason, online archives for scientific publications have become more prominent in the scientific community. Search engines and digital libraries help researchers identify scientific publications; however, they provide limited search capabilities and visual interfaces. Most search engines offer a single search field and only basic filtering of the data. Therefore, even with popular search engines, it is hard for users to survey research papers, as they are limited to searching with simple keywords. Relationships across multiple fields of a publication are also not considered, for instance to find related papers or papers connected through citations and references.
The main aim of the thesis is to develop visual access to digital libraries based on scientific research and exploration, supporting users in writing scientific papers. A scientific research and exploration model is developed based on a previous information visualization model for visual trend analysis with digital libraries, and with consideration of the research process. The principles of the Visual Information-Seeking Mantra are incorporated to provide an interactive user interface that enhances the user experience.
Within the scope of this work, research on human-computer interaction is carried out, particularly regarding aspects of user interface design. An overview of scientific research, its types, and various aspects of data analysis is given. Different research models as well as existing approaches and tools that help researchers with literature surveys are also examined. Finally, the architecture and implementation details of the scientific research and exploration system that provides visual access to digital libraries are presented. |
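The Visual Information-Seeking Mantra mentioned above ("overview first, zoom and filter, then details on demand") maps naturally onto three operations over publication records. The sketch below is a minimal, assumed illustration; the record fields (field, year, citations) are hypothetical and not taken from the thesis implementation.

```python
def overview(pubs):
    """Overview first: publication counts per research field."""
    counts = {}
    for pub in pubs:  # pub: {"title": str, "field": str, "year": int, "citations": list}
        counts[pub["field"]] = counts.get(pub["field"], 0) + 1
    return counts

def zoom_and_filter(pubs, field=None, year_range=None):
    """Zoom and filter: narrow the result set by facet values."""
    result = pubs
    if field is not None:
        result = [p for p in result if p["field"] == field]
    if year_range is not None:
        low, high = year_range
        result = [p for p in result if low <= p["year"] <= high]
    return result

def details_on_demand(pubs, title):
    """Details on demand: the full record, including citation links to related papers."""
    return next((p for p in pubs if p["title"] == title), None)
```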
12. | | Akshay Madhav Deshmukh (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Automated User Evaluation Analysis for a Simplified and Continuous Software Development Master Thesis TU Darmstadt, 2018, Master Sc. Thesis. @mastersthesis{Deshmukh2018,
title = {Automated User Evaluation Analysis for a Simplified and Continuous Software Development},
author = {Akshay Madhav Deshmukh and Arjan Kuijper and Dirk Burkhardt},
year = {2018},
date = {2018-02-02},
urldate = {2018-02-02},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {In today's world, computers are tightly coupled with the internet and play a vital role in the development of businesses and many aspects of human life. Hence, developing a high-quality user-computer interface has become a major challenge. Well-designed programs that are easily usable are moulded through a rigorous development life cycle. To ensure a user-friendly interface, the interface has to be well designed and needs to support smart interaction features. The user interface can become an Achilles' heel of a developed system: simple design mistakes cause critical interaction problems, which eventually lead to a massive loss of attractiveness of the system. To overcome this problem, regular and consistent user evaluations have to be carried out to ensure the usability of the system.
The importance of evaluation for the development of a system is well known. Most of today's existing approaches require users to carry out the evaluation in a laboratory. Evaluators have to dedicate time to informing participants about the evaluation process and ensuring a clear understanding of the questionnaires during the experiment. In the post-experiment phase, evaluators have to invest a huge amount of time in generating a result report. On the whole, most of today's existing evaluation approaches take up too much time for most developments.
The main aim of this thesis is to develop automated evaluation management and result analysis, based on a previously developed web-based evaluation system, which makes it possible to elaborate the evaluation results and identify required changes to the developed system. The major idea is that an evaluation can be prepared once and repeated at regular intervals with different user groups. The automated evaluation result analysis makes it easy to check whether continued development has led to better results and whether a given set of tasks can now be solved better, e.g. through newly added functions or enhanced presentation.
Within the scope of this work, human-computer interaction (HCI) was researched, in particular with regard to user-centered design (UCD) and user evaluation. Different evaluation approaches were examined, in particular evaluation through expert analysis and evaluation through user participation. Existing evaluation strategies and solutions were reviewed, with a focus on distributed evaluations in the form of practical as well as survey-based evaluation methods. A proof of concept of an automated evaluation result analysis that enables easy detection of gaps and improvements in the system was implemented. Finally, the results for the research project Smarter Privacy were compared with the manually performed evaluation.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
In today's world, computers are tightly coupled with the internet and play a vital role in the development of businesses and many aspects of human life. Hence, developing a high-quality user-computer interface has become a major challenge. Well-designed programs that are easily usable are moulded through a rigorous development life cycle. To ensure a user-friendly interface, the interface has to be well designed and needs to support smart interaction features. The user interface can become an Achilles' heel of a developed system: simple design mistakes cause critical interaction problems, which eventually lead to a massive loss of attractiveness of the system. To overcome this problem, regular and consistent user evaluations have to be carried out to ensure the usability of the system.
The importance of evaluation for the development of a system is well known. Most of today's existing approaches require users to carry out the evaluation in a laboratory. Evaluators have to dedicate time to informing participants about the evaluation process and ensuring a clear understanding of the questionnaires during the experiment. In the post-experiment phase, evaluators have to invest a huge amount of time in generating a result report. On the whole, most of today's existing evaluation approaches take up too much time for most developments.
The main aim of this thesis is to develop automated evaluation management and result analysis, based on a previously developed web-based evaluation system, which makes it possible to elaborate the evaluation results and identify required changes to the developed system. The major idea is that an evaluation can be prepared once and repeated at regular intervals with different user groups. The automated evaluation result analysis makes it easy to check whether continued development has led to better results and whether a given set of tasks can now be solved better, e.g. through newly added functions or enhanced presentation.
Within the scope of this work, human-computer interaction (HCI) was researched, in particular with regard to user-centered design (UCD) and user evaluation. Different evaluation approaches were examined, in particular evaluation through expert analysis and evaluation through user participation. Existing evaluation strategies and solutions were reviewed, with a focus on distributed evaluations in the form of practical as well as survey-based evaluation methods. A proof of concept of an automated evaluation result analysis that enables easy detection of gaps and improvements in the system was implemented. Finally, the results for the research project Smarter Privacy were compared with the manually performed evaluation. |
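The core of the automated result analysis is the comparison of repeated evaluation rounds. As a hedged illustration of that idea (not the thesis' actual implementation), the following Python sketch compares per-participant scores from two rounds of the same prepared evaluation and flags a statistically significant improvement; the score scale and significance threshold are assumptions.

```python
from statistics import mean
from scipy import stats

def compare_rounds(before, after, alpha=0.05):
    """Compare per-participant scores of two evaluation rounds and report whether
    the later round improved significantly (Welch's t-test, assumed threshold)."""
    t_stat, p_value = stats.ttest_ind(after, before, equal_var=False)
    return {
        "mean_before": mean(before),
        "mean_after": mean(after),
        "improved": mean(after) > mean(before) and p_value < alpha,
        "p_value": p_value,
    }

# Usage: the same prepared evaluation, repeated with a new user group.
# compare_rounds(before=[62, 70, 58, 66], after=[75, 72, 80, 69])
```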
2017
|
11. | | Masood Hussain (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) A Distributed Approach for Web-Evaluations of Desktop and Web-Based Applications Master Thesis TU Darmstadt, 2017, Master Sc. Thesis. @mastersthesis{Hussain2017,
title = {A Distributed Approach for Web-Evaluations of Desktop and Web-Based Applications},
author = {Masood Hussain and Arjan Kuijper and Dirk Burkhardt},
year = {2017},
date = {2017-12-03},
urldate = {2017-12-03},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Evaluation is an important concept with a wide range of advantages. Its purpose can vary from project to project; usually it is used to check system usability, acceptability, and functionality. Evaluation can also be used to compare two existing systems or to analyse the benefits of a new approach against existing approaches. Today, there are many methods that can be used to perform an evaluation. Among them, the distance evaluation method is the most effective because it is cost-efficient, and it is easy to manage and to analyze the results because it follows a predefined procedure. However, the distance evaluation method can so far only be used to evaluate web applications and is not applicable to desktop applications.
In this thesis, an approach is suggested that enables users to perform distance evaluations of desktop applications. The approach extends the existing web-based evaluation of information visualization, which is currently limited to web applications. The idea is to transfer the desktop application from the host computer and display it in the web browser without compromising the security of the host. For security reasons, the participant's interaction is limited to the application itself, and the participant cannot use any other feature of the remote host. The suggested approach successfully supports the evaluation of desktop applications.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Evaluation is an important concept with a wide range of advantages. Its purpose can vary from project to project; usually it is used to check system usability, acceptability, and functionality. Evaluation can also be used to compare two existing systems or to analyse the benefits of a new approach against existing approaches. Today, there are many methods that can be used to perform an evaluation. Among them, the distance evaluation method is the most effective because it is cost-efficient, and it is easy to manage and to analyze the results because it follows a predefined procedure. However, the distance evaluation method can so far only be used to evaluate web applications and is not applicable to desktop applications.
In this thesis, an approach is suggested that enables users to perform distance evaluations of desktop applications. The approach extends the existing web-based evaluation of information visualization, which is currently limited to web applications. The idea is to transfer the desktop application from the host computer and display it in the web browser without compromising the security of the host. For security reasons, the participant's interaction is limited to the application itself, and the participant cannot use any other feature of the remote host. The suggested approach successfully supports the evaluation of desktop applications. |
10. | | Vinod Singh Ramalwan (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Visual Trend Analysis for Instant Analysis in Mobile Environments Master Thesis TU Darmstadt, 2017, Master Sc. Thesis. @mastersthesis{Ramalwan2017,
title = {Visual Trend Analysis for Instant Analysis in Mobile Environments},
author = {Vinod Singh Ramalwan and Arjan Kuijper and Dirk Burkhardt},
year = {2017},
date = {2017-11-17},
urldate = {2017-11-17},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Today's rapidly changing markets challenge enterprises to stay up to date and react early to the latest trends, such as new technologies and market players. In the past, big players like AOL and Yahoo suffered significant impacts on their businesses due to changes in the market. Hence, trend detection is crucial in order to recognize upcoming trends as early as possible. In particular, small and medium enterprises need an easy and early trend detection system. However, such systems must be aligned with their daily business.
Analytical observations create significant impact and often prove profitable for businesses. However, current solutions require a huge amount of effort to collect data and perform the analysis. To incorporate such a solution, it is essential to have a system that makes it possible to quickly check an upcoming trend without heavy parameterization. Analytical observations prove especially beneficial in mobile environments, for example while traveling, where they save time by allowing analyses to be performed instantly on a mobile phone or tablet. In the past decade, mobile device development has changed the technology market substantially. Still, desktop devices are preferred over mobile devices for visual analysis, with device memory, display size, and device speed being the major factors contributing to this choice.
In this work, I propose a concept for visual trend analysis in mobile environments that assists decision makers and business leaders in performing analyses on demand. The proposed concept not only considers the limitations of mobile devices but also utilizes their special features, such as gesture interaction. The concept is built around so-called scenarios; each scenario integrates visual metaphors that together complete the profile of a subject. In this case, a digital library search over research publications was used to choose the subjects, for example checking interesting topics or analyzing the topic leaders of a certain research area to estimate a technological trend in terms of relevance and impact.
Within the scope of this work, human-computer interaction (HCI) was researched in the context of mobile devices and the supplementary features they provide in contrast to desktop computers. Existing techniques and solutions for visual trend analysis were evaluated, and a proof of concept in the form of a web application for mobile devices was implemented. Finally, the developed mobile web application was evaluated against an existing desktop web application based on the same data. This study reveals that the new mobile solution has a significant advantage over the existing desktop solution and was preferred by the users. It is concluded that the design, intuitiveness, and scenario-based concept of the mobile solution, as well as complementary features such as gesture interactions, make it more concrete and goal-oriented.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Today's rapidly changing markets challenge enterprises to stay up to date and react early to the latest trends, such as new technologies and market players. In the past, big players like AOL and Yahoo suffered significant impacts on their businesses due to changes in the market. Hence, trend detection is crucial in order to recognize upcoming trends as early as possible. In particular, small and medium enterprises need an easy and early trend detection system. However, such systems must be aligned with their daily business.
Analytical observations create significant impact and often prove profitable for businesses. However, current solutions require a huge amount of effort to collect data and perform the analysis. To incorporate such a solution, it is essential to have a system that makes it possible to quickly check an upcoming trend without heavy parameterization. Analytical observations prove especially beneficial in mobile environments, for example while traveling, where they save time by allowing analyses to be performed instantly on a mobile phone or tablet. In the past decade, mobile device development has changed the technology market substantially. Still, desktop devices are preferred over mobile devices for visual analysis, with device memory, display size, and device speed being the major factors contributing to this choice.
In this work, I propose a concept for visual trend analysis in mobile environments that assists decision makers and business leaders in performing analyses on demand. The proposed concept not only considers the limitations of mobile devices but also utilizes their special features, such as gesture interaction. The concept is built around so-called scenarios; each scenario integrates visual metaphors that together complete the profile of a subject. In this case, a digital library search over research publications was used to choose the subjects, for example checking interesting topics or analyzing the topic leaders of a certain research area to estimate a technological trend in terms of relevance and impact.
Within the scope of this work, human-computer interaction (HCI) was researched in the context of mobile devices and the supplementary features they provide in contrast to desktop computers. Existing techniques and solutions for visual trend analysis were evaluated, and a proof of concept in the form of a web application for mobile devices was implemented. Finally, the developed mobile web application was evaluated against an existing desktop web application based on the same data. This study reveals that the new mobile solution has a significant advantage over the existing desktop solution and was preferred by the users. It is concluded that the design, intuitiveness, and scenario-based concept of the mobile solution, as well as complementary features such as gesture interactions, make it more concrete and goal-oriented. |
9. | | Marija Schufrin (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Designstudie und Entwicklung von Konzepten zur visuellen Trendanalyse für mobile Umgebungen Master Thesis TU Darmstadt, 2017, Master Sc. Thesis. @mastersthesis{Schufrin2017,
title = {Designstudie und Entwicklung von Konzepten zur visuellen Trendanalyse für mobile Umgebungen},
author = {Marija Schufrin and Arjan Kuijper and Dirk Burkhardt},
year = {2017},
date = {2017-04-24},
urldate = {2017-04-24},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {The rapid rise of data in nearly all sectors of life leads to the endeavor to extract and use the information hidden in this data. The human cognitive capacity to process information, however, is limited. Therefore, new techniques and methods are developed to support humans in such tasks. One facet of searching for information in data is analytical investigation, and discovering trends and patterns is a fundamental part of this process. Visual analytics strives to support humans in this process by providing information visualizations and automated methods for data analysis. The prevailing research on this topic, however, targets desktop computers and laptops. With the rapid spread of mobile devices in recent years, the need to investigate these methods and techniques with regard to their use on mobile devices has risen. The mobile environment, however, yields new challenges as well as possibilities.
Fraunhofer IGD in Darmstadt has recently developed software for visual trend analysis based on digital publication databases. The goal of the software is to make rising and vanishing trends perceivable by visualizing the data from the database. This software, though, is laid out for use on desktop computers. Within the scope of this thesis, a design study has been carried out to investigate the design possibilities for implementing the software on mobile devices.
For this purpose, the target group (decision makers) has been described by a model consisting of three characteristics. One of these characteristics is a combination of two traits, gut feeling and mind, which were identified as polar aspects; their advantages and disadvantages strongly depend on the context. In the mobile environment, the constantly changing context has a strong influence on the user experience. Adaptive mechanisms addressing the mental state of the user (e.g. partial attention) can therefore provide great advantages for a positive user experience.
Within the scope of this design study, three designs have been developed and investigated, focusing respectively on gut feeling, on mind, and on a combination of both. The concepts have been implemented as prototypes and evaluated in a small controlled experiment.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
The rapid rise of data in nearly all sectors of life leads to the endeavor to extract and use the information hidden in this data. The human cognitive capacity to process information, however, is limited. Therefore, new techniques and methods are developed to support humans in such tasks. One facet of searching for information in data is analytical investigation, and discovering trends and patterns is a fundamental part of this process. Visual analytics strives to support humans in this process by providing information visualizations and automated methods for data analysis. The prevailing research on this topic, however, targets desktop computers and laptops. With the rapid spread of mobile devices in recent years, the need to investigate these methods and techniques with regard to their use on mobile devices has risen. The mobile environment, however, yields new challenges as well as possibilities.
Fraunhofer IGD in Darmstadt has recently developed software for visual trend analysis based on digital publication databases. The goal of the software is to make rising and vanishing trends perceivable by visualizing the data from the database. This software, though, is laid out for use on desktop computers. Within the scope of this thesis, a design study has been carried out to investigate the design possibilities for implementing the software on mobile devices.
For this purpose, the target group (decision makers) has been described by a model consisting of three characteristics. One of these characteristics is a combination of two traits, gut feeling and mind, which were identified as polar aspects; their advantages and disadvantages strongly depend on the context. In the mobile environment, the constantly changing context has a strong influence on the user experience. Adaptive mechanisms addressing the mental state of the user (e.g. partial attention) can therefore provide great advantages for a positive user experience.
Within the scope of this design study, three designs have been developed and investigated, focusing respectively on gut feeling, on mind, and on a combination of both. The concepts have been implemented as prototypes and evaluated in a small controlled experiment. |
2016
|
8. | | Jerome Möckel (Author); Bernhard Humm (Supervisor); Klaus Frank (Co-Supervisor); Dirk Burkhardt (Advisor) Prozessorientierte Informationsvisualisierung Bachelor Thesis Darmstadt University of Applied Sciences, 2016, Bachelor Sc. Thesis. @mastersthesis{Möckel2009,
title = {Prozessorientierte Informationsvisualisierung},
author = {Jerome Möckel and Bernhard Humm and Klaus Frank and Dirk Burkhardt},
year = {2016},
date = {2016-09-01},
urldate = {2016-09-01},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {The goal of this bachelor thesis was to develop a process model and subsequently visualize it, enabling users to work more efficiently and faster through suitable tools. Achieving this goal depends on several factors. On the one hand, the application processes must be identified, which raises the question of how these processes can be supported. On the other hand, it is also important to capture the tasks the user is trying to solve. Furthermore, it plays a major role which tools are suitable for these tasks. A suitable application must therefore adapt to the user in order to minimize entry barriers and enable more effective and efficient work.
To achieve this goal, the application processes were identified by means of manual process mining. Based on this data, a process model was developed and then represented through a visualization. Process-oriented adaptation makes it possible to observe process states and user interactions. The user is also supported through the selection of suitable tools. This adaptive step is driven by the rules underlying the developed process model.
A concluding empirical comparative study is intended to provide evidence that this goal has been achieved.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
The goal of this bachelor thesis was to develop a process model and subsequently visualize it, enabling users to work more efficiently and faster through suitable tools. Achieving this goal depends on several factors. On the one hand, the application processes must be identified, which raises the question of how these processes can be supported. On the other hand, it is also important to capture the tasks the user is trying to solve. Furthermore, it plays a major role which tools are suitable for these tasks. A suitable application must therefore adapt to the user in order to minimize entry barriers and enable more effective and efficient work.
To achieve this goal, the application processes were identified by means of manual process mining. Based on this data, a process model was developed and then represented through a visualization. Process-oriented adaptation makes it possible to observe process states and user interactions. The user is also supported through the selection of suitable tools. This adaptive step is driven by the rules underlying the developed process model.
A concluding empirical comparative study is intended to provide evidence that this goal has been achieved. |
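The rule-based, process-oriented adaptation described above can be pictured as a small mapping from observed process states to suggested tools. The following Python sketch is only an assumed illustration of that mechanism; the state names and tool identifiers are hypothetical and not taken from the thesis.

```python
# Rules map an observed process state to tools that support the next step (hypothetical names).
RULES = {
    "data_loaded": ["filter_panel", "overview_chart"],
    "subset_selected": ["detail_table", "comparison_view"],
    "analysis_done": ["report_export"],
}

def suggest_tools(process_state, interaction_log):
    """Process-oriented adaptation: pick suitable tools for the current state,
    preferring tools the user has not used yet in this session."""
    candidates = RULES.get(process_state, [])
    unused = [tool for tool in candidates if tool not in interaction_log]
    return unused or candidates
```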
2015
|
7. | | Kamran Yaqub (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Distributed Social Media Analysis on Microblog-Services for Policy Modeling Master Thesis TU Darmstadt, 2015, Master Sc. Thesis. @mastersthesis{Yaqub2018,
title = {Distributed Social Media Analysis on Microblog-Services for Policy Modeling},
author = {Kamran Yaqub and Arjan Kuijper and Dirk Burkhardt},
year = {2015},
date = {2015-12-02},
urldate = {2015-12-02},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {The increasing number of social media users gives policy makers the opportunity to benefit from this information and communication channel. Through this feedback channel, the policy maker acquires the opinions of citizens, and through this active participation citizens are engaged in the policy modeling process. The process starts with the acquisition of information, continues with its storage, and finally provides the information for statistics and visualizations. The use case implemented for the proof of concept is Twitter, but the proposed system architecture is flexible enough to acquire information from different social media channels, store it after extracting features and applying conversions, provide it to a visualization framework, and incorporate distributed processing techniques. There are many existing systems for visualization and textual content analysis of social media, but none of them provide the facility to explore whole discussions. The discussion track window presents the policy maker with the most relevant and important information in a way that can easily be explored in a roll-up and roll-down fashion, following the strategy of showing an overview of the information first and then filtered and detailed information on demand. Following this scheme, topics are first presented hierarchically in the discussion track window; then posts and their comments, along with their sentiment, are presented in order of importance. Dynamic queries using the AND, OR, and NOT logical operators, together with temporal and linked-data information, help to obtain filtered information. The discussion track window thus eases the policy maker's task of exploring information from different dimensions and analyzing citizens' opinions for a better orientation towards policy modeling.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
The increasing number of social media users gives policy makers the opportunity to benefit from this information and communication channel. Through this feedback channel, the policy maker acquires the opinions of citizens, and through this active participation citizens are engaged in the policy modeling process. The process starts with the acquisition of information, continues with its storage, and finally provides the information for statistics and visualizations. The use case implemented for the proof of concept is Twitter, but the proposed system architecture is flexible enough to acquire information from different social media channels, store it after extracting features and applying conversions, provide it to a visualization framework, and incorporate distributed processing techniques. There are many existing systems for visualization and textual content analysis of social media, but none of them provide the facility to explore whole discussions. The discussion track window presents the policy maker with the most relevant and important information in a way that can easily be explored in a roll-up and roll-down fashion, following the strategy of showing an overview of the information first and then filtered and detailed information on demand. Following this scheme, topics are first presented hierarchically in the discussion track window; then posts and their comments, along with their sentiment, are presented in order of importance. Dynamic queries using the AND, OR, and NOT logical operators, together with temporal and linked-data information, help to obtain filtered information. The discussion track window thus eases the policy maker's task of exploring information from different dimensions and analyzing citizens' opinions for a better orientation towards policy modeling. |
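The dynamic AND/OR/NOT queries with a temporal filter and sentiment-based ordering described above can be illustrated with a few lines of Python. This is a hedged sketch under assumed field names (text, created, sentiment) and an assumed importance measure (absolute sentiment); it is not the thesis' actual implementation.

```python
def matches(text, all_of=(), any_of=(), none_of=()):
    """AND / OR / NOT keyword filter over a post's text."""
    text = text.lower()
    return (all(k.lower() in text for k in all_of)
            and (not any_of or any(k.lower() in text for k in any_of))
            and not any(k.lower() in text for k in none_of))

def discussion_track(posts, all_of=(), any_of=(), none_of=(), since=None):
    """Filtered posts ordered by importance; here |sentiment| is a stand-in measure."""
    hits = [p for p in posts  # p: {"text": str, "created": datetime, "sentiment": float}
            if matches(p["text"], all_of, any_of, none_of)
            and (since is None or p["created"] >= since)]
    return sorted(hits, key=lambda p: abs(p["sentiment"]), reverse=True)
```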
6. | | Sachin Pattan (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Distributed Search Intention Analysis for User-Centered Visualizations Master Thesis TU Darmstadt, 2015, Master Sc. Thesis. @mastersthesis{Pattan2015,
title = {Distributed Search Intention Analysis for User-Centered Visualizations},
author = {Sachin Pattan and Arjan Kuijper and Dirk Burkhardt},
year = {2015},
date = {2015-12-01},
urldate = {2015-12-01},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {In recent years, web search engines (WSEs) have become the most widely used information retrieval systems in the world. As the available information increases explosively, it becomes more difficult to fetch information that meets the user's preferences. This calls for a deep study of users' prior knowledge and intentions and is hence a critical area of research in many organizations. Search intention analysis is already implemented in well-known search engines such as Google and Bing, and several research works propose approaches for it. Still, there is a lack of intention mining techniques in the field of semantic visualizations, which are designed to provide visual adaptations especially for exploratory queries. Hence, it is critical to distinguish exploratory from targeted search queries. In addition, advancements in distributed software systems make them applicable to all systems that need some form of load distribution, and search intention analysis requires such load distribution to perform intention mining in parallel.
In this thesis, a new approach for classifying users' search intentions in a distributed setup is described, along with an in-depth study and comparison of existing approaches. The approach uses three parameters, word frequency, query length, and entity matching, to differentiate user queries into exploratory, targeted, and analysis search queries. As the approach focuses mainly on word frequency analysis, this is done with the help of several information sources, such as the Wortschatz frequency service of the University of Leipzig and the Microsoft N-gram service. The model is evaluated with the help of a survey tool and a few machine learning techniques. The survey was conducted with more than a hundred users, and the evaluation of the model with the collected data yields satisfactory results.},
type = {Master Thesis},
note = {Master Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
In recent years, web search engines (WSEs) have become the most widely used information retrieval systems in the world. As the available information increases explosively, it becomes more difficult to fetch information that meets the user's preferences. This calls for a deep study of users' prior knowledge and intentions and is hence a critical area of research in many organizations. Search intention analysis is already implemented in well-known search engines such as Google and Bing, and several research works propose approaches for it. Still, there is a lack of intention mining techniques in the field of semantic visualizations, which are designed to provide visual adaptations especially for exploratory queries. Hence, it is critical to distinguish exploratory from targeted search queries. In addition, advancements in distributed software systems make them applicable to all systems that need some form of load distribution, and search intention analysis requires such load distribution to perform intention mining in parallel.
In this thesis, a new approach for classifying users' search intentions in a distributed setup is described, along with an in-depth study and comparison of existing approaches. The approach uses three parameters, word frequency, query length, and entity matching, to differentiate user queries into exploratory, targeted, and analysis search queries. As the approach focuses mainly on word frequency analysis, this is done with the help of several information sources, such as the Wortschatz frequency service of the University of Leipzig and the Microsoft N-gram service. The model is evaluated with the help of a survey tool and a few machine learning techniques. The survey was conducted with more than a hundred users, and the evaluation of the model with the collected data yields satisfactory results. |
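The three signals named in the abstract, word frequency, query length, and entity matching, lend themselves to a simple rule-based illustration. The sketch below is an assumed, minimal classifier; the thresholds and decision rules are illustrative and do not reproduce the thesis' actual model.

```python
def classify_query(query, corpus_freq, entities):
    """Rule-of-thumb classification using word frequency, query length and entity
    matching; thresholds are illustrative, not the thesis' tuned values."""
    words = query.lower().split()
    avg_freq = sum(corpus_freq.get(w, 0.0) for w in words) / max(len(words), 1)
    has_entity = any(w in entities for w in words)

    if has_entity and len(words) <= 3:
        return "targeted"      # short query naming a known entity
    if avg_freq > 0.01 and len(words) <= 2:
        return "exploratory"   # short query made of common, broad terms
    return "analysis"          # longer, more specific information need
```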
5. | | Mustapha El Achouri (Author); Woldemar Fuhrmann (Supervisor); Johannes Reichardt (Co-Supervisor); Dirk Burkhardt (Advisor) Context-basierte Open Government Data Visualisierung für Policy Modeling Bachelor Thesis Darmstadt University of Applied Sciences, 2015, Bachelor Sc. Thesis. @mastersthesis{Achouri2015,
title = {Context-basierte Open Government Data Visualisierung für Policy Modeling},
author = {Mustapha El Achouri and Woldemar Fuhrmann and Johannes Reichardt and Dirk Burkhardt},
year = {2015},
date = {2015-03-01},
urldate = {2015-03-01},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {With rapidly growing populations and constrained resources, decision makers want to make sound policy decisions and set priorities. Statistical indicators are a powerful tool for supporting political decision makers, provided they are adapted to the changing global reality.
Statistical indicator data plays a major role in policy modeling at the national and international level. In recent years, there has been growing interest in complementing and improving the quality of policy decisions with the latest statistical information from open government data (such as the World Bank, Eurostat, the United Nations, etc.). These data are freely available and can be consumed by end users in interactive visualizations. However, additional information is required to enable policy makers to interpret these statistics properly and make sense of the raw data. Besides researchers, the media, civil society, and business leaders increasingly demand information in order to assess current trends and evaluate the outcomes of different policies and decisions.
In this work, we present an approach to enrich these statistical indicators with additional information and thereby define and determine an indicator context.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
With rapidly growing populations and constrained resources, decision makers want to make sound policy decisions and set priorities. Statistical indicators are a powerful tool for supporting political decision makers, provided they are adapted to the changing global reality.
Statistical indicator data plays a major role in policy modeling at the national and international level. In recent years, there has been growing interest in complementing and improving the quality of policy decisions with the latest statistical information from open government data (such as the World Bank, Eurostat, the United Nations, etc.). These data are freely available and can be consumed by end users in interactive visualizations. However, additional information is required to enable policy makers to interpret these statistics properly and make sense of the raw data. Besides researchers, the media, civil society, and business leaders increasingly demand information in order to assess current trends and evaluate the outcomes of different policies and decisions.
In this work, we present an approach to enrich these statistical indicators with additional information and thereby define and determine an indicator context. |
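To illustrate what an "indicator context" could look like in practice, here is a small, assumed Python sketch that bundles a raw indicator time series with contextual metadata before it is handed to a visualization. The field names and example values are hypothetical and not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class IndicatorContext:
    """Context that makes a raw indicator interpretable (illustrative fields)."""
    source: str                              # e.g. "World Bank", "Eurostat"
    unit: str                                # e.g. "% of GDP"
    definition: str                          # what the indicator measures
    related_indicators: list = field(default_factory=list)

def enrich_indicator(series, context):
    """Bundle raw yearly values with their context for a policy-modeling visualization."""
    return {"values": series, "context": context}

# enrich_indicator({2012: 1.9, 2013: 2.1}, IndicatorContext("Eurostat", "%", "Annual inflation rate"))
```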
2014
|
4. | | Christopher Klamm (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Ausprägungen elektronischer Beteiligung in Industrie- und Entwicklungsländern: Ein anwendungsorientierter Vergleich am Beispiel Deutschland und Kenia Bachelor Thesis TU Darmstadt, 2014, Bachelor A. Thesis. @mastersthesis{Klamm2014,
title = {Ausprägungen elektronischer Beteiligung in Industrie- und Entwicklungsländern: Ein anwendungsorientierter Vergleich am Beispiel Deutschland und Kenia},
author = {Christopher Klamm and Arjan Kuijper and Dirk Burkhardt},
year = {2014},
date = {2014-06-01},
urldate = {2014-06-01},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Political participation is one of the primary pillars of democracy. Therefore, new methods are being developed to initiate and improve political participation. The rapid development of information and communication technologies (ICT) opens new possibilities for different forms of participation, and such methods are already integrated in projects around the world. These forms use ICT as a medium to support political participation (e-participation).
The thesis analyzes the historical and practical development of e-participation in Germany and Kenya and aims to compare the two in order to find differences between an industrialized and a developing country, since both, as democracies, share the same responsibility to integrate political participation. The thesis first gives the historical and practical context of e-participation in both countries. With this context, the practical forms of e-participation are then compared using a classification. The classification covers three aspects of e-participation: the form of political participation, the ICT form, and the needs that are addressed.
The analysis shows that in Germany more projects are initiated by the state (top-down) than in Kenya (bottom-up). It also shows that Kenya has more e-participation projects with the participation form of transparency through third parties, whereas Germany has more consultation projects. Focusing on citizens' needs, it can be seen that the need for security is often addressed in Kenya, whereas in Germany all projects target social needs. In addition, the ICT forms alert, FAQ, and forum are used with similar frequency in both countries, but differences are pronounced for the forms blog and evaluation. There are also stronger differences for the forms chat, survey, and game: chat and survey are only used in Germany, while game is only integrated in Kenyan projects.},
type = {Bachelor Thesis},
note = {Bachelor A. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Political participation is one of the primary pillars of democracy. Therefore, new methods are being developed to initiate and improve political participation. The rapid development of information and communication technologies (ICT) opens new possibilities for different forms of participation, and such methods are already integrated in projects around the world. These forms use ICT as a medium to support political participation (e-participation).
The thesis analyzes the historical and practical development of e-participation in Germany and Kenya and aims to compare the two in order to find differences between an industrialized and a developing country, since both, as democracies, share the same responsibility to integrate political participation. The thesis first gives the historical and practical context of e-participation in both countries. With this context, the practical forms of e-participation are then compared using a classification. The classification covers three aspects of e-participation: the form of political participation, the ICT form, and the needs that are addressed.
The analysis shows that in Germany more projects are initiated by the state (top-down) than in Kenya (bottom-up). It also shows that Kenya has more e-participation projects with the participation form of transparency through third parties, whereas Germany has more consultation projects. Focusing on citizens' needs, it can be seen that the need for security is often addressed in Kenya, whereas in Germany all projects target social needs. In addition, the ICT forms alert, FAQ, and forum are used with similar frequency in both countries, but differences are pronounced for the forms blog and evaluation. There are also stronger differences for the forms chat, survey, and game: chat and survey are only used in Germany, while game is only integrated in Kenyan projects. |
2013
|
3. | | Jan Ruben Zilke (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) Visualisierung und Analyse von politischen Daten unter Verwendung von Linked Open Government Data Bachelor Thesis TU Darmstadt, 2013, Bachelor Sc. Thesis. @mastersthesis{Zilke2013,
title = {Visualisierung und Analyse von politischen Daten unter Verwendung von Linked Open Government Data},
author = {Jan Ruben Zilke and Arjan Kuijper and Dirk Burkhardt},
year = {2013},
date = {2013-10-01},
urldate = {2013-10-01},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Governments' use of the internet is increasing substantially. More and more online e-Government services are introduced that spare citizens a visit to government agencies. The internet is also used to strengthen citizens' interest in political affairs: governments publish large amounts of collected data to enable analysis of the current political situation. Citizens are thereby empowered to identify anomalies, discover room for improvement, and initiate transformation processes, thus engaging with and participating in the political agenda.
Visualisations are valuable for facilitating the analysis of political data, as they can make an analysis more efficient. Currently available visualisations, however, fail to consider the public sector's characteristics. On the one hand, they do not exploit the knowledge available in linked open government data; on the other hand, they could offer more ways to explicitly promote participation. A further problem is that many visualisations make little use of interaction tools.
To address these shortcomings, this thesis presents a model for visually supporting policy modeling that considers the characteristics of e-Government and its related data in the form of linked open government data. Diverse interaction options are used to design an effective visual analysis. An important component of the model is the integration of participation tools that provide straightforward ways to share gained knowledge and discuss proposed solutions.
The thesis is structured as follows. After a short introduction to the work's objectives, e-Government and its characteristics are presented. Subsequently, e-Government-related data and data formats are analysed. The following chapter reviews how public authorities have used visualisations so far and gives recommendations for future improvements. These recommendations, together with the considerations on the e-Government context, are then used to develop a model for visually supporting policy modeling. Finally, the model is evaluated theoretically on the basis of the FUPOL project before a conclusion completes the thesis.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Governments' use of the internet is increasing substantially. More and more online e-Government services are introduced that spare citizens a visit to government agencies. The internet is also used to strengthen citizens' interest in political affairs: governments publish large amounts of collected data to enable analysis of the current political situation. Citizens are thereby empowered to identify anomalies, discover room for improvement, and initiate transformation processes, thus engaging with and participating in the political agenda.
Visualisations are valuable for facilitating the analysis of political data, as they can make an analysis more efficient. Currently available visualisations, however, fail to consider the public sector's characteristics. On the one hand, they do not exploit the knowledge available in linked open government data; on the other hand, they could offer more ways to explicitly promote participation. A further problem is that many visualisations make little use of interaction tools.
To address these shortcomings, this thesis presents a model for visually supporting policy modeling that considers the characteristics of e-Government and its related data in the form of linked open government data. Diverse interaction options are used to design an effective visual analysis. An important component of the model is the integration of participation tools that provide straightforward ways to share gained knowledge and discuss proposed solutions.
The thesis is structured as follows. After a short introduction to the work's objectives, e-Government and its characteristics are presented. Subsequently, e-Government-related data and data formats are analysed. The following chapter reviews how public authorities have used visualisations so far and gives recommendations for future improvements. These recommendations, together with the considerations on the e-Government context, are then used to develop a model for visually supporting policy modeling. Finally, the model is evaluated theoretically on the basis of the FUPOL project before a conclusion completes the thesis. |
2011
|
2. | | Christian Glaser (Author); Arjan Kuijper (Supervisor); Dirk Burkhardt (Advisor/Co-Supervisor) System zur benutzerbezogenen Interaktion Bachelor Thesis TU Darmstadt, 2011, Bachelor Sc. Thesis. @mastersthesis{Glaser2011,
title = {System zur benutzerbezogenen Interaktion},
author = {Christian Glaser and Arjan Kuijper and Dirk Burkhardt},
year = {2011},
date = {2011-06-01},
urldate = {2011-06-01},
address = {Darmstadt},
school = {TU Darmstadt},
abstract = {Modern interaction devices increasingly orient themselves towards natural human interaction, for example through gestures. This trend began in 2006, driven in particular by the commercial success of Nintendo's Wii game console and its gesture-based controller, the WiiMote, which for the first time made it possible to control games entirely through gestures. The advantage of this modern form of interaction is simpler and more easily understandable communication between human and machine.
In contrast to such proprietary systems, which rely on hardware and software specifically tuned to each other, gesture-based controls are rarely used in the PC domain. The reason is the lack of central support, so that application developers have to integrate such devices individually for every application. This represents a considerable additional effort that is rarely accounted for in software development, which is why only few applications support alternative input devices. Most of these applications are special-purpose software for which the input devices and their integration have been optimized. The goal of this thesis is therefore to develop a concept for central support of modern alternative input devices. For this purpose, a classification also has to be developed that organizes the various input devices with respect to their interaction methods and the required interaction result (e.g. coordinates, recognized gesture), thereby simplifying and encouraging their later integration into applications. Based on this concept, a system can be developed that organizes all devices and their forms of interaction. Using the provided interfaces, a developer is then able to integrate these input devices into an application and make them available to the user as a means of communication.
To develop such a system, theoretical approaches from the field of human-computer interaction are examined first, serving as the foundation for the methodical approach. This also includes a survey of currently available alternative input devices. Based on the insights gained, current classification concepts and existing implementations of interaction systems are examined in particular. These findings are then used to develop a concept for an interaction system and the underlying organization of the input devices to be supported. Based on this concept, the thesis concludes with a description of the prototypical technical implementation, using the WiiMote as an example.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Modern interaction devices increasingly orient themselves towards natural human interaction, for example through gestures. This trend began in 2006, driven in particular by the commercial success of Nintendo's Wii game console and its gesture-based controller, the WiiMote, which for the first time made it possible to control games entirely through gestures. The advantage of this modern form of interaction is simpler and more easily understandable communication between human and machine.
In contrast to such proprietary systems, which rely on hardware and software specifically tuned to each other, gesture-based controls are rarely used in the PC domain. The reason is the lack of central support, so that application developers have to integrate such devices individually for every application. This represents a considerable additional effort that is rarely accounted for in software development, which is why only few applications support alternative input devices. Most of these applications are special-purpose software for which the input devices and their integration have been optimized. The goal of this thesis is therefore to develop a concept for central support of modern alternative input devices. For this purpose, a classification also has to be developed that organizes the various input devices with respect to their interaction methods and the required interaction result (e.g. coordinates, recognized gesture), thereby simplifying and encouraging their later integration into applications. Based on this concept, a system can be developed that organizes all devices and their forms of interaction. Using the provided interfaces, a developer is then able to integrate these input devices into an application and make them available to the user as a means of communication.
To develop such a system, theoretical approaches from the field of human-computer interaction are examined first, serving as the foundation for the methodical approach. This also includes a survey of currently available alternative input devices. Based on the insights gained, current classification concepts and existing implementations of interaction systems are examined in particular. These findings are then used to develop a concept for an interaction system and the underlying organization of the input devices to be supported. Based on this concept, the thesis concludes with a description of the prototypical technical implementation, using the WiiMote as an example. |
2009
|
1. | | Johannes Fritz (Author); Ralf S. Mayer (Supervisor); Steffen Lange (Co-Supervisor); Dirk Burkhardt (Advisor) Intuitives grafisches Editierwerkzeug in Visualisierungen für komplexe, semantische Daten Bachelor Thesis Darmstadt University of Applied Sciences, 2009, Bachelor Sc. Thesis. @mastersthesis{Fritz2009,
title = {Intuitives grafisches Editierwerkzeug in Visualisierungen für komplexe, semantische Daten},
author = {Johannes Fritz and Ralf S. Mayer and Steffen Lange and Dirk Burkhardt},
year = {2009},
date = {2009-00-00},
urldate = {2009-00-00},
address = {Darmstadt},
school = {Darmstadt University of Applied Sciences},
abstract = {Modern technologies and techniques constantly open up new possibilities for visualising information on the web. Graphical visualisations, often used under buzzwords such as Rich Internet Applications (RIA), play an increasingly important role. In parallel with this development on the web, the need to make knowledge accessible and to present it accordingly is growing. For this purpose, semantic visualisations, among others, are used to present knowledge and relationships within a knowledge domain in a clear way and to query them quickly and precisely through interaction techniques. As a consequence, an ever larger amount of domain-specific knowledge must be made available and kept up to date. To cope with this demand, the enrichment of knowledge by so-called domain experts is indispensable: advanced knowledge structures store this knowledge, while domain experts contribute and update their expertise. The challenge arises from the fact that knowledge is stored in complex semantic structures, which require very specialized knowledge about the structure of a conceptual formalization of a knowledge area, i.e. an ontology for structuring knowledge. Since domain experts are not ontology experts, an abstracted representation is necessary that hides the complexity of the ontology and presents the large amounts of data in a clear way. To edit the data, an editing functionality is needed that builds on this abstracted representation. The goal of this thesis is the development of an intuitive editing component for web-based graphical visualisations that allows the direct editing of ontology data on a conceptual level.
To understand the origin of intuitive actions in user interfaces, theoretical foundations from the field of human-computer interaction are introduced first. In addition, further scientific findings on the topic of intuitiveness provide a basis for presenting established intuitive metaphors in modern user interfaces. The comparison and evaluation of current editing tools for semantic data against the criteria established here then motivates the development of a concept that enables intuitive editing of ontology data. The implementation follows on the basis of this concept, and an assessment of the system within an evaluation concludes this thesis.},
type = {Bachelor Thesis},
note = {Bachelor Sc. Thesis},
keywords = {},
pubstate = {published},
tppubtype = {mastersthesis}
}
Modern technologies and techniques constantly open up new possibilities for visualising information on the web. Graphical visualisations, often used under buzzwords such as Rich Internet Applications (RIA), play an increasingly important role. In parallel with this development on the web, the need to make knowledge accessible and to present it accordingly is growing. For this purpose, semantic visualisations, among others, are used to present knowledge and relationships within a knowledge domain in a clear way and to query them quickly and precisely through interaction techniques. As a consequence, an ever larger amount of domain-specific knowledge must be made available and kept up to date. To cope with this demand, the enrichment of knowledge by so-called domain experts is indispensable: advanced knowledge structures store this knowledge, while domain experts contribute and update their expertise. The challenge arises from the fact that knowledge is stored in complex semantic structures, which require very specialized knowledge about the structure of a conceptual formalization of a knowledge area, i.e. an ontology for structuring knowledge. Since domain experts are not ontology experts, an abstracted representation is necessary that hides the complexity of the ontology and presents the large amounts of data in a clear way. To edit the data, an editing functionality is needed that builds on this abstracted representation. The goal of this thesis is the development of an intuitive editing component for web-based graphical visualisations that allows the direct editing of ontology data on a conceptual level.
To understand the origin of intuitive actions in user interfaces, theoretical foundations from the field of human-computer interaction are introduced first. In addition, further scientific findings on the topic of intuitiveness provide a basis for presenting established intuitive metaphors in modern user interfaces. The comparison and evaluation of current editing tools for semantic data against the criteria established here then motivates the development of a concept that enables intuitive editing of ontology data. The implementation follows on the basis of this concept, and an assessment of the system within an evaluation concludes this thesis. |