#acl Claudius Korzen:read,write Patrick Brosi:read,write Axel Lehmann:read,write Natalie Prange:read,write All:read

This page gives an overview of available, ongoing and completed Bachelor's and Master's Projects and Theses at the Chair for Algorithms and Data Structures.

<>

= Available projects and theses =

== Extracting schedule data from PDF timetables ==
Many public transit agencies already publish their schedule data as GTFS feeds or in some proprietary format. The goal of this project or thesis is to extract machine-readable schedule data from PDF timetables published for print, like [[https://www.rmv.de/c/fileadmin/import/timetable/RMV_Linienfahrplan_G_10_ab_15.12.19_bis_12.12.20_ohne.pdf|this one]]. We expect that good results can already be achieved with an off-the-shelf tool for extracting tabular data from PDFs (for example, pdftotext). Full station names and coordinates can be extracted from OSM data. An interest in Geo data, schedule data and Hidden Markov Models is beneficial.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Annotating OpenStreetMap data with elevation data ==
[[http://www.openstreetmap.org|OpenStreetMap]] comes with the attribute {{{height}}} to capture the elevation of elements above sea level. This attribute, however, is seldom used. The goal of this project is to develop a tool which takes external elevation data from the SRTM project (https://wiki.openstreetmap.org/wiki/SRTM) and annotates the complete OSM data with elevation information (that is, each node should be given a {{{height}}} attribute). The resolution and accuracy of the elevation data make this a challenging task: for example, rivers should not flow uphill, nodes on bridges and in tunnels should not be assigned the ground elevation, and obvious errors (streets suddenly dropping 5 meters) should be smoothed out.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].
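The per-node elevation lookup at the heart of such a tool could be sketched as follows. This is a minimal sketch assuming an SRTM tile has already been decoded into a row-major grid of samples (row 0 at the northern edge, as in SRTM .hgt files); reading the binary file, voids, and the smoothing heuristics are omitted.

```python
# Sketch: bilinear interpolation of elevation from an SRTM-style grid.
# Assumes the tile covers a 1x1 degree cell with n x n samples and that
# row 0 is the northern edge; (tile_lat, tile_lon) is the south-west corner.

def elevation(grid, lat, lon, tile_lat, tile_lon):
    """Interpolate the elevation at (lat, lon) from one tile."""
    n = len(grid)                       # samples per row/column
    x = (lon - tile_lon) * (n - 1)      # fractional column
    y = (tile_lat + 1 - lat) * (n - 1)  # fractional row (row 0 = north)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, n - 1), min(y0 + 1, n - 1)
    fx, fy = x - x0, y - y0
    top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
    bot = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
    return top * (1 - fy) + bot * fy
```

Interpolating between the four surrounding samples (rather than taking the nearest one) already avoids the worst staircase artifacts along rivers and roads.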
<>

== Map-Matching Mobile Phones to Public Transit Vehicles ==
Consider the following scenario: you are sitting in a public transit vehicle and you are interested in the available connections at the next stop, or the current delay of your vehicle, or you have changed your desired destination and want to calculate a new route, using the public transit vehicle you are currently in as a starting point. To do all this, your device has to know which vehicle it is currently travelling in. This is a map-matching problem, but the "map" (the positions of all vehicles in the network) is not static, but highly dynamic. The goal of this project or thesis is to develop an app plus a dynamic map-matching backend which "snaps" the device to the most likely public transit vehicle. Once such a link is established, the device can be used to improve the real-time information for the vehicle (the Google project "Pigeon Transit" did exactly that). An interest in Geo data and schedule data is beneficial.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

= Ongoing projects and theses =

== Entity Linking with BERT Embeddings ==
Recent approaches to Entity Linking make use of BERT word embeddings. In contrast to traditional word embeddings like !Word2Vec, BERT word embeddings are contextualized. That is, the same word has different embeddings, depending on the context it occurs in. This is an interesting property to exploit when trying to disambiguate an entity mention in a text against a knowledge base. The goal of this project is to implement a neural network that uses BERT word embeddings to link entities to their entries in the knowledge base !WikiData.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/prange|Natalie Prange]].
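The disambiguation step in the entity linking project can be illustrated with a sketch: score each candidate entity by the cosine similarity between the contextual embedding of the mention and a precomputed entity embedding. The two-dimensional vectors below are toy stand-ins for real BERT outputs, and the candidate-scoring scheme shown is one simple baseline, not the required architecture.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

def link(mention_vec, candidates):
    """candidates: dict mapping entity id -> embedding.
    Returns the id of the most similar candidate."""
    return max(candidates, key=lambda e: cosine(mention_vec, candidates[e]))
```

Because the mention embedding depends on the surrounding sentence, the same surface form can end up linked to different entities in different contexts, which is exactly the property the project wants to exploit.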
<>

== Automated Generation of Rail Noise Maps ==
Given rail schedule data which has been fully map-matched (that is, for each vehicle we know the exact path it takes through a geographical network), it is possible to automatically generate rail noise maps from this data. In this project or bachelor thesis, you are provided with map-matched schedule data from which you should generate such noise maps. Your maps should take into account high buildings or large forests, which may act as noise shields. The noise intensity should be estimated from the expected vehicle speed. Your maps should offer the ability to show the noise for adjustable time spans. An interest in Geo data and schedule data is beneficial.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Simplified Package Delivery App ==
The goal of this bachelor project is to create a web application to streamline package deliveries in large institutions, companies or campuses without a central concierge service.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Question Answering on WikiData ==
Make our question answering work on [[https://www.wikidata.org|WikiData]]. !WikiData is currently growing fast and will become the new Freebase. It's an exciting dataset.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== River Maps ==
The goal of this project is to use our tool [[http://loom.cs.uni-freiburg.de|LOOM]] to render maps of rivers from OSM data. Each river segment should consist of all rivers that have contributed to this river so far (for example, beginning at Mannheim, the Neckar should be part of the segment that makes up the Rhine). Think of a single river as a single subway line starting at the source of that river, and of the Rhine, for example, as dozens of small subway lines running next to each other.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].
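The "lines per segment" idea behind the river maps project can be sketched on a toy downstream graph: every source river pushes its name along the flow direction, so each segment accumulates the set of rivers that have joined it so far. The segment names and graph below are illustrative, not real OSM data, and a real implementation would work on the OSM waterway network.

```python
# Sketch: for each river segment, compute the set of contributing source
# rivers -- the "subway lines" that a tool like LOOM would render side by
# side along that segment.

def segment_lines(downstream, sources):
    """downstream: dict segment -> next segment (None at the mouth).
    sources: dict segment -> name of the river rising there.
    Returns dict segment -> frozenset of contributing river names."""
    lines = {seg: set() for seg in downstream}
    for seg, name in sources.items():
        cur = seg
        while cur is not None:          # push the name all the way downstream
            lines[cur].add(name)
            cur = downstream[cur]
    return {seg: frozenset(names) for seg, names in lines.items()}
```

On this model, a segment of the Rhine below Mannheim carries both "Rhine" and "Neckar", matching the example in the project description.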
<>

== Schedule Map Matching with Graphhopper ==
In recent years, the Graphhopper routing engine has been extended with a map-matching functionality based on [[https://www.ismll.uni-hildesheim.de/lehre/semSpatial-10s/script/6.pdf|this Microsoft paper]]. The goal of this project or thesis is to use Graphhopper to implement a map-matching tool which produces the correct geographical routes (shapes) for schedule data given in the GTFS format. The tool should be evaluated (speed and quality) against our own schedule map matching tool pfaedle.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Spelling Correction by Neural Machine Translation ==
In this project or thesis you will use state-of-the-art neural machine translation methods to develop a spell checker. Misspelled text is treated as an artificial language, and the task is to train a neural network that “translates” it to correctly spelled text.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/hertel|Matthias Hertel]].

<>

== Spelling Correction and Autocompletion for Mobile Devices ==
Automatic spelling correction and text completion methods help billions of smartphone users every day. The goal of this project or thesis is to develop an Android keyboard which gives accurate corrections and completions efficiently.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/hertel|Matthias Hertel]].

<>

== Fast and Accurate Sentence Segmentation ==
Sentence boundary detection (sometimes called sentence segmentation or sentence splitting) is the task of determining where sentences start and end in a given text. While this seems like a simple task at first glance, and rule-based methods work in many cases, there are difficult corner cases where current methods fail. The goal of this project or thesis is to develop a segmentation method based on deep learning, which beats traditional methods in accuracy and is fast enough to be applicable to big datasets.
Supervised by [[https://ad.informatik.uni-freiburg.de/staff/hertel|Matthias Hertel]].

<>

== Disaggregation of Energy Data ==
Manufacturers need to know the electricity consumption profiles of their machines. However, electricity consumption is usually only measured as an aggregate over the entire factory. The goal of this project or thesis is to use Machine Learning and Deep Learning to predict the per-machine consumption from the aggregated measurement.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/hertel|Matthias Hertel]] and [[mailto:benedikt.koepfer@ise.fraunhofer.de|Benedikt Köpfer (Fraunhofer ISE)]].

<>

= Completed projects and theses =

For a detailed description of completed projects see our [[https://ad-blog.informatik.uni-freiburg.de/|AD Blog]]. For details about completed theses see our [[https://ad.informatik.uni-freiburg.de/publikationen/bachelor_master_arbeiten|website]].

== Merging Overlapping GTFS Feeds (Bachelor project or thesis) ==
Many transportation companies publish their timetable data either directly as GTFS feeds or in formats that can be converted to GTFS. As soon as you have two GTFS feeds (two sets of timetable data) that cover either the same or adjacent areas, the problem of duplicate trips arises. You should develop a tool that merges two or more GTFS feeds and solves duplication and fragmentation issues.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Extract and Analyze Scientists' Homepages (project and/or thesis) ==
Extract a large number of scientists' homepages from the [[http://commoncrawl.org/|CommonCrawl]] web crawl. Extract the central information from these pages, including: name, profession, gender, affiliation. It will be relatively straightforward to get results of medium quality. The challenge is to achieve results of high quality. Machine learning will be crucial to achieve that. Exploring suitable methods is part of the challenge.
Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Tokenization Repair (project and/or thesis) ==
An interesting and well-defined problem, the solution of which is relevant in a variety of information retrieval scenarios. Simple rule-based solutions come to mind easily, but machine learning is key to get very good results. A background in machine learning, or a strong willingness to acquire one as part of the project/thesis, is mandatory for this project.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Combined SPARQL+Text search (project or thesis) ==
This is well-suited as a project (B.Sc. or M.Sc.) but also provides ample opportunity for continuation with a thesis (B.Sc. or M.Sc.). You should be fond of good user interfaces and have a good taste concerning layout, colors and such things. You should also like knowledge bases and big datasets and searching in them.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Synonym Finder (project and/or thesis) ==
Find synonyms for all entities from Freebase, !WikiData or Wikipedia. Evaluate the quality and compare it to existing synonym databases like !CrossWikis. Motivation: most entities are known under several names. For example, a person like "Ellen !DeGeneres" is also known as just "Ellen". A profession like "Astronaut" is also known as "Spaceman", "Spacewoman" or "Cosmonaut". Knowing these synonyms is key for many NLP (Natural Language Processing) problems, including complex problems like semantic search and question answering.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== A Search Engine for OpenStreetMap Data (project and/or thesis) ==
Implement the backend for a search engine for [[http://www.openstreetmap.org|OpenStreetMap]] data. Your server should load an arbitrary .osm file and answer (fuzzy) string searches over the entire dataset.
This is a nice project to familiarize yourself with different index structures. Continuation as a thesis is possible. Preferably you have attended our lecture [[InformationRetrievalWS1617|Information Retrieval]], but this is not required. Code should be written in C++.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>

== Tabular Information Extraction (project and/or thesis) ==
Extract data from a knowledge base in a tabular format. This could, for example, be a list of cities with columns for the country they are in, their population count and their coordinates, but really anything fitting in a table should be possible. Think of the typically hand-crafted summary tables on Wikipedia. This is well-suited as a project (B.Sc. or M.Sc.) with possible continuation as a thesis. You should be interested in knowledge bases, big datasets and searching in them.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/schnelle|Niklas Schnelle]].

<>

== Conversational Aqqu (project and/or thesis) ==
In this project you will be working on an extension to our Question Answering system Aqqu. This extension will enable follow-up questions and thus a more natural interface.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/schnelle|Niklas Schnelle]].

<>

== A Simple Chat Bot ==
Build a simple chat bot using deep learning. For a recent example of such a chatbot, see [[https://www.washingtonpost.com/news/to-your-health/wp/2017/12/03/the-woebot-will-see-you-now-the-rise-of-chatbot-therapy|Woebot]]. This topic gives you a lot of freedom concerning the outcome, but it is also free in the sense that you have to research yourself what is already out there and what can realistically be achieved in six months.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].
<>

== Error Correction for Question Answering ==
Design and build a system that accepts a (relatively simple) question in natural language and automatically corrects typos etc. This should be realized with a character-based language model learned using deep learning (e.g., with an RNN).

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Crawling and Analysis of Scientist Homepages ==
Design and build a system that crawls as many university pages as possible, finds as many person mentions as possible, and creates a neat knowledge base from this information (name of scientist, affiliation, gender, title). In previous work, we tried extracting this information from the !CommonCrawl archive, but that turned out to be much too incomplete and unreliable. This is a challenging problem with large practical value.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/schnelle|Niklas Schnelle]].

<>

== Bitcoin Trading App ==
Design and implement a reasonably clever (not too reckless and not too conservative) algorithm for trading bitcoins. Evaluate it on historical data and implement an API and a web app to monitor it on live data. Take into account all the actual costs incurred (like taxes and fees). Analyze the return on investment that can reasonably be expected.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Entity Recognition on a Web Corpus ==
Design and implement a named-entity recognizer for a web-size corpus.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Context Decomposition of a Web Corpus ==
Decompose the sentences of a given web-size corpus into their semantic components, using fancy technology developed in our group.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Mail Search ==
The goal of this project is a fast and efficient search in very large mail archives.
The subtasks are: (1) write an efficient parser that reads one or more files in MBOX format and produces a CSV file with one line per mail and columns for the various structured and unstructured parts of an email (from, to, subject, date, body, ...); (2) take proper care of encoding issues, which are a major problem when dealing with a large number of emails; (3) set up an instance of !CompleteSearch for the data from the CSV file; (4) provide a simple and effective search interface using the instance from (3) as a backend.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/bast|Hannah Bast]].

<>

== Extracting Words from Text Documents with Complex Layouts (bachelor thesis) ==
Design and implement a (learning-based) system for extracting words from layout-based text documents (e.g., PDF documents), which is a surprisingly difficult (but not super-hard) task. The reason is that the text is typically only provided character-wise (and not word-wise), so that word boundaries must be derived from, e.g., analyzing the spacing between the characters. Another challenge is that the layout of a text document can be arbitrarily complex, with text arranged in multiple columns and different alignments, so that special care is required to not mix up text from different columns.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/korzen|Claudius Korzen]].

<>

== Extracting Special Characters from Layout-Based Text Documents (bachelor thesis) ==
Design and implement a (learning-based) system for extracting ''ligatures'' (like fi or ffi) and ''characters with diacritics'' (like á and è) from layout-based text documents (e.g., PDF documents). The challenge here is that such characters can be ''drawn'' into the text, in which case they need to be recognized by analyzing their shapes.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/korzen|Claudius Korzen]].
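The spacing analysis mentioned in the word-extraction project above can be sketched as follows. This is a rule-based baseline, not the learning-based system the thesis asks for: it assumes each extracted character comes with a bounding box, and the 35% gap threshold is an illustrative guess; real documents additionally need column and alignment handling.

```python
# Sketch: group characters on one text line into words by looking at the
# horizontal gaps between consecutive bounding boxes. A gap wider than a
# fraction of the average glyph width starts a new word.

def group_words(chars, gap_factor=0.35):
    """chars: list of (glyph, x0, x1) tuples on one line, sorted by x0."""
    if not chars:
        return []
    avg_width = sum(x1 - x0 for _, x0, x1 in chars) / len(chars)
    words, current = [], [chars[0][0]]
    for (_, _, prev_x1), (glyph, x0, _) in zip(chars, chars[1:]):
        if x0 - prev_x1 > gap_factor * avg_width:
            words.append("".join(current))   # gap is wide: word boundary
            current = []
        current.append(glyph)
    words.append("".join(current))
    return words
```

A learning-based system would replace the fixed threshold with a classifier over features such as gap width, font size and font changes.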
<>

== GTFS Browser Web App (Bachelor project or thesis) ==
Develop a web application that can be used to analyze huge GTFS datasets. There are already some tools available (for example, [[https://github.com/google/transitfeed/wiki/ScheduleViewer|ScheduleViewer]]), but they all feel and look quite clumsy, are incredibly slow and cannot handle large datasets.

Supervised by [[https://ad.informatik.uni-freiburg.de/staff/brosi|Patrick Brosi]].

<>
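Since GTFS feeds are plain CSV files, a first analysis step for such a browser can be sketched in a few lines: stream trips.txt and count trips per route. The column names follow the GTFS reference; the sample data in the test is made up, and a real backend for huge feeds would replace the in-memory counter with an on-disk index.

```python
import csv
from collections import Counter

# Sketch: aggregate a GTFS trips.txt file (passed as any file-like object)
# into a count of trips per route -- a typical statistic a GTFS browser
# would display per feed.

def trips_per_route(trips_csv):
    counts = Counter()
    for row in csv.DictReader(trips_csv):
        counts[row["route_id"]] += 1
    return counts
```

Streaming row by row keeps memory usage independent of the feed size, which matters for the "huge datasets" requirement.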