This page gives an overview of available, ongoing, and completed Bachelor's and Master's projects and theses at the Chair for Algorithms and Data Structures.
Available projects and theses
Extracting schedule data from PDF timetables: Many public transit agencies already publish their schedule data as GTFS feeds or in some proprietary format. The goal of this project and thesis is to extract machine-readable schedule data from PDF timetables published for print, like this one. We expect that good results can already be achieved by using an off-the-shelf tool for extracting tabular data from PDFs (pdftotext). Full station names and coordinates can be extracted from OSM data.
Interests / skills: Geo data, schedule data, Hidden Markov Models
Supervisor: Patrick Brosi

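To illustrate the first extraction step: `pdftotext -layout` preserves the visual column layout, so a small parser over its output can already recover rows of stations and departure times. The sketch below is a rough Python illustration on a made-up table layout (real RMV timetables will need more robust handling):

```python
import re

# Times like "08:15"; non-capturing group so findall returns full matches.
TIME = re.compile(r"\b(?:[01]?\d|2[0-3]):[0-5]\d\b")

def parse_timetable(text):
    """Parse layout-preserved text into (station, [departure times]) rows."""
    rows = []
    for line in text.splitlines():
        times = TIME.findall(line)
        if not times:
            continue  # header line or empty line
        # Everything before the first time is taken as the station name.
        station = line[:line.index(times[0])].strip()
        rows.append((station, times))
    return rows

# Made-up example of what `pdftotext -layout` output might look like.
sample = """G 10  Monday - Friday
Freiburg Hbf        08:15  09:15  10:15
Emmendingen         08:31  09:31  10:31"""
print(parse_timetable(sample))
```

Matching the extracted station names against full names and coordinates from OSM data would be a subsequent step.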
Map-Matching Mobile Phones to Public Transit Vehicles: Consider the following scenario: you are sitting in a public transit vehicle and you are interested in the available connections at the next stop, or the current delay of your vehicle, or you have changed your desired destination and want to calculate a new route, using the public transit vehicle you are currently in as a starting point. To do all this, your device has to know which vehicle it is currently travelling in. This is a map-matching problem, but the "map" (the positions of all vehicles in the network) is not static, but highly dynamic. The goal of this thesis and project is to develop an app plus a dynamic map-matching backend which "snaps" the device to the most likely public transit vehicle. Once such a link is established, the device can be used to improve the real-time information for the vehicle.
Interests / skills: Geo data, schedule data.
Supervisor: Patrick Brosi

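As a toy illustration of the "snapping" idea (not the proposed backend, which would need proper geo distances, timestamps, and a probabilistic model): score each candidate vehicle by how closely its recent positions track the device's recent positions.

```python
import math

def dist(p, q):
    # Euclidean distance; real code would use a proper geo distance (haversine).
    return math.hypot(p[0] - q[0], p[1] - q[1])

def snap_to_vehicle(device_track, vehicle_tracks):
    """Return the id of the vehicle whose recent positions best match the
    device's recent positions (smaller mean distance = better match)."""
    def score(vid):
        track = vehicle_tracks[vid]
        return sum(dist(d, v) for d, v in zip(device_track, track)) / len(device_track)
    return min(vehicle_tracks, key=score)

# Made-up tracks, assumed to be sampled at the same timestamps.
device = [(0.0, 0.0), (1.0, 0.1), (2.0, 0.0)]
vehicles = {
    "tram_3": [(0.1, 0.0), (1.1, 0.0), (2.1, 0.1)],  # moves with the device
    "bus_11": [(5.0, 5.0), (5.5, 5.0), (6.0, 5.0)],  # far away
}
print(snap_to_vehicle(device, vehicles))
```

A real system would keep such scores over time and only commit to a vehicle once one candidate is clearly the most likely.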
Ongoing projects and theses
Question Answering on WikiData: Make our question answering work on WikiData. WikiData is currently growing fast and will become the new Freebase. It's an exciting dataset.
Supervisor: Hannah Bast

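A hedged illustration of the simplest possible baseline: translate a fixed question pattern into a SPARQL query against WikiData. The relation-to-property lexicon here is a tiny hypothetical fragment; a real question answering system learns such mappings rather than hard-coding them.

```python
import re

# Hypothetical fragment of a relation-to-WikiData-property lexicon.
PROPERTIES = {"spouse": "P26", "capital": "P36"}

def question_to_sparql(question):
    """Translate 'who/what is the <relation> of <entity>?' into a SPARQL query."""
    m = re.match(r"(?:who|what) is the (\w+) of (.+?)\??$", question, re.IGNORECASE)
    if not m or m.group(1).lower() not in PROPERTIES:
        return None  # pattern not covered by this naive baseline
    prop, entity = PROPERTIES[m.group(1).lower()], m.group(2)
    return ('SELECT ?answer WHERE { '
            f'?s rdfs:label "{entity}"@en . '
            f'?s wdt:{prop} ?answer . }}')

print(question_to_sparql("Who is the spouse of Barack Obama?"))
```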
River Maps: The goal of this project is to use our tool LOOM to render maps of rivers from OSM data. Each river segment should consist of all rivers that have contributed to it so far (for example, beginning at Mannheim, the Neckar should be part of the segment that makes up the Rhine). Think of a single river as a single subway line starting at the source of that river, and of the Rhine, for example, as dozens of small subway lines next to each other.
Supervisor: Patrick Brosi

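The aggregation step can be illustrated on a toy river graph: for each segment, collect all upstream segments whose water has merged into it; these then become the parallel "subway lines" of that segment. Segment names below are made up for illustration.

```python
from collections import defaultdict

# Toy river graph: segment -> downstream segment (hypothetical segmentation).
downstream = {
    "Neckar": "Rhine_lower",
    "Main": "Rhine_lower",
    "Rhine_upper": "Rhine_lower",
}

def contributing_rivers(segment):
    """All segments whose water has flowed into `segment` (including itself)."""
    upstream = defaultdict(set)
    for up, down in downstream.items():
        upstream[down].add(up)
    result, stack = set(), [segment]
    while stack:  # depth-first traversal of the upstream tributaries
        s = stack.pop()
        if s in result:
            continue
        result.add(s)
        stack.extend(upstream[s])
    return result

print(sorted(contributing_rivers("Rhine_lower")))
```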
Schedule Map Matching with Graphhopper: In recent years, the Graphhopper routing engine has been extended with map-matching functionality based on this Microsoft paper. The goal of this project or thesis is to use Graphhopper to implement a map-matching tool which produces the correct geographical routes (shapes) for schedule data given in the GTFS format. The tool should be evaluated (speed and quality) against our own schedule map-matching tool pfaedle.
Supervisor: Patrick Brosi

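One geometric building block of any such tool is projecting the stops of a trip onto a candidate route polyline and checking that the projections advance monotonically along the line. A minimal sketch with plane coordinates instead of lat/lon:

```python
def project_on_segment(p, a, b):
    """Return (fraction along segment, squared distance to p) for the
    closest point to p on the segment a-b."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    t = 0.0 if seg_len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    cx, cy = ax + t * dx, ay + t * dy
    return t, (px - cx) ** 2 + (py - cy) ** 2

def snap_stops(stops, polyline):
    """Snap each stop to its closest position along the polyline; return the
    per-stop travel position (segment index + fraction along that segment)."""
    positions = []
    for p in stops:
        best = min(
            ((i + t, d2) for i, (a, b) in enumerate(zip(polyline, polyline[1:]))
             for t, d2 in [project_on_segment(p, a, b)]),
            key=lambda x: x[1])
        positions.append(best[0])
    return positions

line = [(0, 0), (10, 0), (10, 10)]
stops = [(1, 1), (9, -1), (11, 5)]
print(snap_stops(stops, line))  # increasing values = stops in travel order
```

If the positions are not increasing, the candidate route visits the stops in the wrong order and should be penalized or discarded.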
Completed projects and theses
For a detailed description of completed projects, see our AD Blog. For details about completed theses, see our website.
Merging Overlapping GTFS Feeds (Bachelor project or thesis): Many transportation companies publish their timetable data either directly as GTFS feeds or in formats that can be converted to GTFS. As soon as you have two GTFS feeds (two sets of timetable data) that cover the same or adjacent areas, the problem of duplicate trips arises. You should develop a tool that merges two or more GTFS feeds and solves duplication / fragmentation issues.

Type: Bachelor project or Bachelor thesis. You should be fond of train schedules and have a basic understanding of GIS and geometrical operations. Programming language of your choice, but performance should be good enough to handle very big datasets.

Background: Consider a schedule published by Deutsche Bahn containing only trains, and a schedule published by VAG Freiburg containing buses, trams, and the Breisgau-S-Bahn. The S-Bahn is contained in both datasets, but most probably with different IDs, different line names ("BSB" vs. "Breisgau-S-Bahn" vs. "Zug" vs. ...), different station IDs, and different station coordinates. Consider an even more complicated example, where a train schedule published by DB contains a train from Amsterdam to Zürich. The train is contained in the DB feed from Amsterdam to Basel SBB (where it crosses the Swiss border), but the part in Switzerland is missing. Another dataset, published by the Swiss Federal Railways, contains the same train, but starting only at Basel Bad Bf (the last German station) and ending at Zürich. A third dataset, published by the Nederlandse Spoorwegen, contains the train from Amsterdam to the first station in Germany. If you want to use all three feeds together, several problems appear: the train is represented twice between Amsterdam and the first station in Germany, twice between Basel Bad Bf and Basel SBB, and the information that you can travel through Basel SBB into Switzerland without having to change trains is completely lost.

Goal: Your input will be two or more GTFS feeds; your output will be a single, merged GTFS feed that solves all the problems described above. Specifically, you should analyze the data, think of equivalence measures for trips (for example, if a train called "ICE 501" arrives at Basel Bad Bf at 15:44 in feed A and a train "ICE501" departs from Basel Bad Bf at 15:49 in feed B, it is most likely the same train) and merge trips / routes that belong to the same vehicle. Another example: if two trains in A and B serve exactly the same stations at exactly the same arrival / departure times, this is also most likely the same train. You should think of a testing mechanism that makes sure that every connection that was possible in feed A and feed B is still possible in the merged feed, i.e., that no information was lost. Given some overlapping trip that appears in different quality in both feeds, your tool should also automatically decide which (partial) representation has the better quality (for example, in feed A, no geographical information on the train route ('shape') is present, but in feed B it is, so use the shape information from feed B). Your tool should be able to handle huge datasets (for example, the entire schedule [trains, buses, trams, ferries, etc.] of Germany).

Supervisor: Patrick Brosi

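Two of the equivalence measures described above can be sketched in a few lines (times as minutes since midnight, station names assumed to be already normalized; a real tool needs fuzzy name matching and tolerances):

```python
def is_duplicate(trip_a, trip_b):
    """Two trips are considered duplicates if they serve exactly the same
    stations at exactly the same times."""
    return trip_a["stops"] == trip_b["stops"]

def is_continuation(trip_a, trip_b, max_gap_min=10):
    """A trip in feed B continues a trip in feed A if B starts at the station
    where A ends, a few minutes later (e.g. at the border station)."""
    last_station, last_time = trip_a["stops"][-1]
    first_station, first_time = trip_b["stops"][0]
    return (last_station == first_station
            and 0 <= first_time - last_time <= max_gap_min)

# Made-up fragment of the Basel example: feed A ends at the border station,
# feed B starts there five minutes later.
ice_a = {"stops": [("Freiburg Hbf", 900), ("Basel Bad Bf", 944)]}
ice_b = {"stops": [("Basel Bad Bf", 949), ("Basel SBB", 956), ("Zürich HB", 1020)]}
print(is_duplicate(ice_a, ice_b), is_continuation(ice_a, ice_b))
```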
Extract and Analyze Scientists' Homepages (project and/or thesis): Extract a large number of scientists' homepages from the CommonCrawl web crawl. Extract the central information from these pages, including name, profession, gender, and affiliation. It will be relatively straightforward to get results of medium quality. The challenge is to achieve results of high quality. Machine learning will be crucial to achieve that; exploring suitable methods is part of the challenge.

Tokenization Repair (project and/or thesis): Interesting and well-defined problem, the solution of which is relevant in a variety of information retrieval scenarios. Simple rule-based solutions come to mind easily, but machine learning is key to get very good results.
Interests / skills: A background in machine learning, or a strong willingness to acquire one as part of the project/thesis, is mandatory for this project.
Supervisor: Hannah Bast

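A minimal rule-based baseline of the kind mentioned above: re-segment text whose spaces were lost, using dynamic programming over a dictionary. The tiny word list is just for illustration; machine learning is needed to resolve ambiguities this approach cannot.

```python
from functools import lru_cache

# Toy dictionary; a real system would use a large vocabulary or a
# learned language model instead.
WORDS = {"the", "tokenization", "repair", "problem", "is", "interesting"}

def repair(text):
    """Re-insert spaces into `text` by dynamic programming over dictionary
    words; returns None if no segmentation exists."""
    s = text.replace(" ", "").lower()

    @lru_cache(maxsize=None)
    def split(i):
        if i == len(s):
            return []
        for j in range(i + 1, len(s) + 1):
            if s[i:j] in WORDS:
                rest = split(j)
                if rest is not None:
                    return [s[i:j]] + rest
        return None

    parts = split(0)
    return " ".join(parts) if parts is not None else None

print(repair("thetokenizationrepairproblem is interesting"))
```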
Combined SPARQL+Text search (project or thesis): This is well-suited as a project (B.Sc. or M.Sc.) but also provides ample opportunity for continuation with a thesis (B.Sc. or M.Sc.).
Interests / skills: You should be fond of good user interfaces and have good taste concerning layout, colors, and similar things. You should also like knowledge bases and big datasets and searching in them.
Supervisor: Hannah Bast

Synonym Finder (project and/or thesis): Find synonyms for all entities from Freebase, WikiData, or Wikipedia. Evaluate the quality and compare it to existing synonym databases like CrossWikis. Motivation: most entities are known under several names. For example, a person like "Ellen DeGeneres" is also known as just "Ellen". A profession like "Astronaut" is also known as "Spaceman", "Spacewoman", or "Cosmonaut". Knowing these synonyms is key for many NLP (Natural Language Processing) problems, including complex problems like semantic search and question answering.
Supervisor: Hannah Bast

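One plausible (hypothetical) approach: harvest (surface form, entity) pairs, e.g. from Wikipedia link anchor texts, and keep the frequent ones as synonyms:

```python
from collections import defaultdict

def build_synonyms(mention_pairs, min_count=2):
    """Aggregate (surface form, entity) pairs into a synonym table;
    pairs seen fewer than `min_count` times are dropped as noise."""
    counts = defaultdict(int)
    for surface, entity in mention_pairs:
        counts[(surface.lower(), entity)] += 1
    synonyms = defaultdict(set)
    for (surface, entity), n in counts.items():
        if n >= min_count:
            synonyms[entity].add(surface)
    return synonyms

# Made-up mention data; the last pair is a typo seen only once.
pairs = ([("Ellen", "Ellen DeGeneres")] * 3
         + [("Ellen DeGeneres", "Ellen DeGeneres")] * 2
         + [("Elen", "Ellen DeGeneres")])
syn = build_synonyms(pairs)
print(sorted(syn["Ellen DeGeneres"]))  # ['ellen', 'ellen degeneres']
```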
A Search Engine for OpenStreetMap Data (project and/or thesis): Implement the backend for a search engine for OpenStreetMap data. Your server should load an arbitrary .osm file and answer (fuzzy) string searches over the entire dataset. This is a nice project to familiarize yourself with different index structures. Continuation as a thesis is possible.
Interests / skills: Preferably you have attended our lecture Information Retrieval, but this is not required. Code should be written in C++.
Supervisor: Patrick Brosi

Tabular Information Extraction (project and/or thesis): Extract data from a knowledge base in a tabular format. This could, for example, be a list of cities with columns for the country they are in, their population count, and their coordinates, but really anything fitting in a table should be possible. Think of the typically hand-crafted summary tables on Wikipedia. This is well-suited as a project (B.Sc. or M.Sc.) with possible continuation as a thesis.
Interests / skills: You should be interested in knowledge bases, big datasets, and searching in them.
Supervisor: Niklas Schnelle

Conversational Aqqu (project and/or thesis): In this project you will work on an extension to our question answering system Aqqu. This extension will enable follow-up questions and thus a more natural interface.
Supervisor: Niklas Schnelle

A Simple Chat Bot: Build a simple chat bot using deep learning. For a recent example of such a chatbot, see Woebot. This topic gives you a lot of freedom concerning the outcome, but it is also free in the sense that you have to research for yourself what is already out there and what can realistically be achieved in six months.
Supervisor: Hannah Bast

Error Correction for Question Answering: Design and build a system that accepts a (relatively simple) question in natural language and automatically corrects typos etc. This should be realized with a character-based language model learned using deep learning (e.g., with an RNN).
Supervisor: Hannah Bast

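For contrast with the proposed neural approach, here is the classic non-learning baseline (Norvig-style): generate all candidates at edit distance 1 and pick the most frequent in-vocabulary one. The vocabulary below is a made-up toy:

```python
import string

# Toy vocabulary with frequencies; the thesis would replace this with a
# character-based language model learned with deep learning.
VOCAB = {"who": 100, "is": 90, "the": 95, "president": 10, "of": 80, "germany": 5}

def edits1(word):
    """All strings at edit distance 1 from `word`."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [a + b[1:] for a, b in splits if b]
    inserts = [a + c + b for a, b in splits for c in letters]
    replaces = [a + c + b[1:] for a, b in splits if b for c in letters]
    transposes = [a + b[1] + b[0] + b[2:] for a, b in splits if len(b) > 1]
    return set(deletes + inserts + replaces + transposes)

def correct(word):
    """Return the most frequent in-vocabulary candidate for `word`."""
    if word in VOCAB:
        return word
    candidates = [c for c in edits1(word) if c in VOCAB]
    return max(candidates, key=VOCAB.get) if candidates else word

print(" ".join(correct(w) for w in "who is teh presidnt of germany".split()))
```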
Crawling and Analysis of Scientist Homepages: Design and build a system that crawls as many university pages as possible, finds as many person mentions as possible, and creates a neat knowledge base from this information (name of scientist, affiliation, gender, title). In previous work, we tried extracting this information from the CommonCrawl archive, but that turned out to be much too incomplete and unreliable. This is a challenging problem with large practical value.
Supervisor: Niklas Schnelle

Bitcoin Trading App: Design and implement a reasonably clever (not too reckless and not too conservative) algorithm for trading bitcoins. Evaluate it on historical data and implement an API and a web app to monitor it on live data. Take into account all the actual costs incurred (like taxes and fees). Analyze the return on investment that can reasonably be expected.
Supervisor: Hannah Bast

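A very simple example of the kind of strategy one could backtest on historical data, with a proportional fee per trade; all numbers are made up and this is in no way trading advice:

```python
def sma(prices, n):
    """Simple moving average over the last n prices (None until enough data)."""
    return [None if i + 1 < n else sum(prices[i + 1 - n:i + 1]) / n
            for i in range(len(prices))]

def backtest(prices, fast=3, slow=5, fee=0.002, cash=1000.0):
    """Buy when the fast SMA crosses above the slow SMA, sell on the reverse
    cross; `fee` is the proportional cost per trade. Returns the final value
    of the portfolio (cash plus coins at the last price)."""
    f, s = sma(prices, fast), sma(prices, slow)
    coins = 0.0
    for i in range(1, len(prices)):
        if None in (f[i], s[i], f[i - 1], s[i - 1]):
            continue  # not enough history for both averages yet
        if f[i] > s[i] and f[i - 1] <= s[i - 1] and cash > 0:     # upward cross
            coins, cash = cash * (1 - fee) / prices[i], 0.0
        elif f[i] < s[i] and f[i - 1] >= s[i - 1] and coins > 0:  # downward cross
            cash, coins = coins * prices[i] * (1 - fee), 0.0
    return cash + coins * prices[-1]

prices = [100, 98, 97, 99, 103, 108, 112, 110, 105, 101, 99, 98]
print(round(backtest(prices), 2))
```

A serious evaluation would additionally model taxes, slippage, and the spread, and compare the strategy against simply holding.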
Entity Recognition on a Web Corpus: Design and implement a named-entity recognizer for a web-size corpus.
Supervisor: Hannah Bast

Context Decomposition of a Web Corpus: Decompose the sentences of a given web-size corpus into their semantic components, using fancy technology developed in our group.
Supervisor: Hannah Bast

Mail Search: The goal of this project is fast and efficient search in very large mail archives. The subtasks are: (1) write an efficient parser that reads one or more files in MBOX format and produces a CSV file with one line per mail and columns for the various structured and unstructured parts of an email (from, to, subject, date, body, ...); (2) take proper care of encoding issues, which are a major issue when dealing with a large number of emails; (3) set up an instance of CompleteSearch for the data from the CSV file; (4) provide a simple and effective search interface using the instance from (3) as a backend.
Supervisor: Hannah Bast

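Subtask (1) can be prototyped with Python's standard-library mailbox module; the sketch below ignores multipart bodies and header decoding, which are exactly where the encoding issues from subtask (2) appear:

```python
import csv
import io
import mailbox
import tempfile

def mbox_to_csv(mbox_path, csv_file):
    """Write one CSV row per mail with the most important fields. A real
    parser must additionally decode encoded headers and multipart bodies."""
    writer = csv.writer(csv_file)
    writer.writerow(["from", "to", "subject", "date", "body"])
    for msg in mailbox.mbox(mbox_path):
        body = msg.get_payload() if not msg.is_multipart() else ""
        writer.writerow([msg["from"], msg["to"], msg["subject"],
                         msg["date"], body.strip()])

# Tiny demo archive (hypothetical addresses).
sample = """From alice@example.com Thu Jan  2 10:00:00 2020
From: alice@example.com
To: bob@example.com
Subject: Lunch?
Date: Thu, 2 Jan 2020 10:00:00 +0000

How about 12:30?
"""
with tempfile.NamedTemporaryFile("w", suffix=".mbox", delete=False) as f:
    f.write(sample)
out = io.StringIO()
mbox_to_csv(f.name, out)
print(out.getvalue())
```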
Extracting Words from Text Documents with Complex Layouts (bachelor thesis): Design and implement a (learning-based) system for extracting words from layout-based text documents (e.g., PDF documents), which is a surprisingly difficult (but not super-hard) task. The reason is that the text is typically only provided character-wise (and not word-wise), so that word boundaries must be derived from, e.g., analyzing the spacings between the characters. Another challenge is that the layout of a text document can be arbitrarily complex, with text arranged in multiple columns and different alignments, so that special care is required not to mix up text from different columns.
Supervisor: Claudius Korzen

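The core heuristic can be sketched as follows: given the bounding boxes of the characters on one text line, start a new word whenever the horizontal gap exceeds a threshold derived from the average character width (the threshold factor is made up; a learning-based system would estimate it per document):

```python
def group_words(chars, gap_factor=0.5):
    """Group character boxes (x0, x1, glyph) on one text line into words.
    A new word starts whenever the gap to the previous character exceeds
    `gap_factor` times the average character width."""
    chars = sorted(chars)
    avg_width = sum(x1 - x0 for x0, x1, _ in chars) / len(chars)
    words, current = [], chars[0][2]
    for prev, cur in zip(chars, chars[1:]):
        gap = cur[0] - prev[1]  # space between consecutive bounding boxes
        if gap > gap_factor * avg_width:
            words.append(current)
            current = cur[2]
        else:
            current += cur[2]
    words.append(current)
    return words

# "to be": character boxes with a wide gap between the two words.
boxes = [(0, 5, "t"), (5.5, 10, "o"), (16, 21, "b"), (21.5, 26, "e")]
print(group_words(boxes))  # ['to', 'be']
```

Handling multiple columns would additionally require segmenting the page into text blocks before grouping characters into lines and words.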
Extracting Special Characters from Layout-Based Text Documents (bachelor thesis): Design and implement a (learning-based) system for extracting ligatures (like fi or ffi) and characters with diacritics (like á and è) from layout-based text documents (e.g., PDF documents). The challenge here is that such characters can be drawn into the text, in which case they need to be recognized by analyzing their shapes.
Supervisor: Claudius Korzen

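For ligatures and diacritics that do have proper Unicode codepoints, normalization already solves the problem; the hard, shape-analysis part of the thesis concerns glyphs that are merely drawn. A sketch of the easy half:

```python
import unicodedata

def fix_ligatures(text):
    """Map ligature codepoints (fi, ff, ...) to plain letter sequences and
    compose combining diacritics; NFKC normalization does both in one step.
    Ligatures that are only drawn (no proper codepoint in the extracted
    text) still require shape analysis."""
    return unicodedata.normalize("NFKC", text)

print(fix_ligatures("e\ufb00icient \ufb01ne caf\u0065\u0301"))  # efficient fine café
```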
GTFS Browser Web App (Bachelor project or thesis): Develop a web application that can be used to analyze huge GTFS datasets. There are already some tools available (for example, ScheduleViewer), but they all feel and look quite clumsy, are incredibly slow, and cannot handle large datasets.
Supervisor: Patrick Brosi