---
title: "Background information about MTT data"
author: "Nora Wickelmaier"
date: "`r Sys.Date()`"
output:
  html_document:
    number_sections: true
    toc: true
---

```{r, include = FALSE}
# setwd("C:/Users/nwickelmaier/Nextcloud/Documents/MDS/2023ss/60100_master_thesis")
devtools::load_all("../../../software/mtt")
```

# Log data from the Multi-Touch Table at the HAUM

The Multi-Touch Table at the Herzog-Anton-Ulrich-Museum (HAUM) in
Braunschweig gives visitors of the museum the opportunity to interact with
67 artworks and 3 tiles containing information about the museum and its
layout. The table was installed at the institute in October 2016, and since
November 2016 log files of the visitors' interactions with the table have
been collected. These log files are in an unstructured format and cannot
easily be analyzed. The purpose of the following document is to describe
how the data have been transformed and which decisions have been made along
the way.

# Data structure

The log files contain lines that indicate the beginning and end of possible
actions that can be performed when interacting with the artworks on the
table. The layout of the table looks as if 70 pictures had been tossed onto
a large table. Every artwork is visible in the start configuration. People
can move the pictures on the table; they can also be scaled and rotated.
Additionally, the virtual picture cards can be flipped in order to find
more information about the artwork on the "back" of the card. One has to
press a little `i` for more information in one of the bottom corners of the
card. On the back of the card two (?) to six information cards can be found
with a teaser text about a certain topic. These topic cards can be opened,
and a hypertext with detailed information pops up. Within these hypertexts,
certain technical terms can be clicked so that lay people get more
information. This also opens a pop-up. The events encoded in the raw log
files therefore have the following structure.

```
"Start Application" --> Start Application
"Show Application"
"Transform start" --> Move
"Transform stop"
"Show Info" --> Flip Card
"Show Front"
"Artwork/OpenCard" --> Open Topic
"Artwork/CloseCard"
"ShowPopup" --> Open Popup
"HidePopup"
```

The right side shows which events can be extracted from these raw lines.
"Start Application" is not an event in the original sense since it only
indicates that the table was started or reset itself. This is not an
interaction with the table and therefore not interesting in itself. All
"Start Application" and "Show Application" lines are therefore excluded
from the data during further processing and remain only in the raw log
files.

# Parsing the raw log files

The first step is to parse the raw log files, which are stored by the
application as text files in a rather unstructured format, into a format
that is easier to handle. The data are therefore transferred to a
spreadsheet format. The following section describes which problems were
encountered while doing this.

## Corrupt lines

When reading the files containing the raw logs into R, a warning appears
that says

```
Warning messages:
incomplete final line found on '_2016/2016_11_18-11_31_0.log'
incomplete final line found on '_2016/2016_11_18-11_38_30.log'
incomplete final line found on '_2016/2016_11_18-11_40_36.log'
...
```

When you open these files, the last line appears to contain some binary
content. It is unclear why and how this happens. These lines were therefore
removed when reading the data. A warning is given that indicates how many
files have been affected.
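
A minimal sketch of this cleaning step (assuming `logfiles` holds the paths
to the raw log files; `read_log()` is a hypothetical helper, not the actual
parser):

```{r, eval = FALSE}
# Minimal sketch: read one raw log file, muffle the "incomplete final
# line" warning, and drop lines that are not valid text
read_log <- function(file) {
  lines <- suppressWarnings(readLines(file))
  lines[validUTF8(lines)]
}
raw <- lapply(logfiles, read_log)
```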

## Units of the variables

* What unit do x and y have? Pixels? --> yes
* What unit does scale have? --> some kind of bit value; it does not
  matter when calculating a ratio
* Is rotation really in degrees? --> yes
* After which time interval does the table reset itself to the start
  configuration? --> PM needs to look it up

## How unclosed events are handled

## How a case is defined

* Find out whether more than one person is standing at the table?
  - Sliding window in which the number of artworks is counted? Or how far
    apart the touched artworks are from each other?
  - One can already "see" this in the logs -- but how can I extract it
    automatically? What is my definition of an "interaction boost"?
  - However we do it, does it work on the event log data?

## Additional meta data

* Enrich the log data with further metadata? What would be interesting?
  - Metadata on artworks like name, artist, type of artwork, epoch, etc.
  - School vacations and holidays
  - Special exhibits at the museum
  - Number of visitors per day (follow up with Sven again?)
  - Age structure of visitors per day?
  - ... ????

# Problems and how I handled them

This section lists some problems with the log data that required decisions.
These decisions influence the outcome and maybe even the data quality.
Hence, I tried to document how I handled these problems and explain the
decisions I made.

## Weird behavior of `timeMs` and neg. `duration` values

I think the negative duration values happen when an event starts in one log
file and completes in another one. The variable `timeMs` seems to be
continuous within one log file but not over several log files.

```{r, results = "hide", fig.show = "asis"}
# Read data
dat0 <- read.table("data/haum/raw_logfiles_small_2023-09-26_13-50-20.csv",
                   sep = ";", header = TRUE)
dat0$date <- as.POSIXct(dat0$date)
dat0$glossar <- ifelse(dat0$artwork == "glossar", 1, 0)

# Remove irrelevant events
dat <- subset(dat0, !(dat0$event %in% c("Start Application",
                                        "Show Application")))

# Add trace variable
artworks <- unique(stats::na.omit(dat$artwork))
artworks <- artworks[artworks != "glossar"]
glossar_files <- unique(subset(dat, dat$artwork == "glossar")$popup)
glossar_dict <- create_glossardict(artworks, glossar_files,
  xmlpath = "data/haum/ContentEyevisit/eyevisit_cards_light/")
dat1 <- add_trace(dat, glossar_dict)

# Close events
dat2 <- rbind(close_events(dat1, "move"),
              close_events(dat1, "flipCard"),
              close_events(dat1, "openTopic"),
              close_events(dat1, "openPopup"))
dat2 <- dat2[order(dat2$date.start, dat2$fileId), ]

plot(timeMs ~ as.factor(fileId), dat[1:5000, ], xlab = "fileId")
```

The boxplot shows that we have a continuous range of values within one log
file but that `timeMs` does not increase over log files. Since it does not
seem possible to fix this in a consistent way, I set all durations to `NA`
where `fileId.start` and `fileId.stop` are not identical. I kept
`timeMs.start` and `timeMs.stop` as well as `fileId.start` and
`fileId.stop` in the data frame, so it is clear why there are no durations.
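
As a sketch, this rule amounts to the following (column names as in the
data frame above):

```{r, eval = FALSE}
# Sketch: invalidate durations for events spanning two log files
same_file <- dat2$fileId.start == dat2$fileId.stop
dat2$duration <- ifelse(same_file,
                        dat2$timeMs.stop - dat2$timeMs.start,
                        NA)
```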

NOTE: Part of this problem was that the time stamps that are part of the
log file names are not zero-left-padded, and therefore the files were not
in the correct order when read into R. When these file IDs are
zero-left-padded and then sorted, first by file ID and then by `date.start`
within file ID, some of the durations are fixed exactly. Unfortunately,
only three `move` events were fixed, since this only repairs irregularities
*within* one log file. See below for more details.

UPDATE: By now I remove all events that span more than one log file. This
improves speed considerably.

## Left padding of file IDs

The file names of the raw log files are automatically generated and contain
a time stamp. This time stamp is not well formed. First, it contains an
incorrect month: the months go from 0 to 11, which means that the file name
`2016_11_15-12_12_57.log` was collected on December 15, 2016 at 12:12 pm.
Another problem is that the file names are not zero-left-padded, e.g.,
`2016_11_15-12_2_57.log`. This file was collected on December 15, 2016 at
12:02 pm and therefore before the file above. But most sorting algorithms
will sort these files in the order shown below. In order to preprocess the
data and close events that belong together, the data need to be sorted by
events and artworks repeatedly. In order to get them back into the correct
time order, it is necessary to order them based on three variables:
`fileId`, `date.start` and `timeMs`. The file IDs therefore need to sort in
the correct order (again, see below for an example). I zero-left-padded the
log file names within the data frame, where they are used as an identifier.
These "file names" do not correspond exactly to the original raw log file
names. This needs to be kept in mind when doing any kind of matching etc.

```
## what it looked like before left padding
# 1422 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_2_57.log 2016-12-15 12:12:56 599671 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26874254
# 1423 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 621 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26523465
# 1424 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 677 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997736 13.26239605
# 1425 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 774 Transform start 076 076.xml NA 2092.25 2008.00 0.2999345 13.26239605
# 1426 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 850 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997107 13.26223362
# 1427 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_2_57.log 2016-12-15 12:12:57 599916 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997771 13.26523465

## what it looks like now
# 1422 2016_11_15-12_02_57.log 2016-12-15 12:12:56 599671 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26874254
# 1423 2016_11_15-12_02_57.log 2016-12-15 12:12:57 599916 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997771 13.26523465
# 1424 2016_11_15-12_12_57.log 2016-12-15 12:12:57 621 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26523465
# 1425 2016_11_15-12_12_57.log 2016-12-15 12:12:57 677 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997736 13.26239605
# 1426 2016_11_15-12_12_57.log 2016-12-15 12:12:57 774 Transform start 076 076.xml NA 2092.25 2008.00 0.2999345 13.26239605
# 1427 2016_11_15-12_12_57.log 2016-12-15 12:12:57 850 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997107 13.26223362
```
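
A minimal sketch of what the padding amounts to (`pad_file_id()` is a
hypothetical helper handling a single file name, not the actual parsing
function):

```{r, eval = FALSE}
# Sketch: zero-left-pad the time stamp inside a raw log file name
pad_file_id <- function(x) {
  stem  <- sub("\\.log$", "", x)
  parts <- as.integer(strsplit(stem, "[_-]")[[1]])
  sprintf("%04d_%02d_%02d-%02d_%02d_%02d.log",
          parts[1], parts[2], parts[3], parts[4], parts[5], parts[6])
}
pad_file_id("2016_11_15-12_2_57.log")  # "2016_11_15-12_02_57.log"
```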

## Timestamps repeat

The time stamps in the `date` variable record year, month, day, hour,
minute, and second. Since one second is not a very short time interval for
a move on a touch display, this is not fine-grained enough to bring events
into the correct order: there are events from the same log file with the
same time stamp and even events from different log files with the same time
stamp. The log files get written about every 10 minutes (which can easily
be seen when looking at the file names of the raw log files). So in order
to get events into the correct order, it is necessary to first order by
file ID, within file ID sort by the time stamp `date`, and within these
more coarse-grained time stamps sort by `timeMs`. But as explained above,
`timeMs` can only be sorted within one file ID, since it does not increase
consistently over log files but starts with a new offset for each raw log
file.
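
As a sketch, this three-step ordering looks like this (variable names as
above):

```{r, eval = FALSE}
# Sketch: restore the chronological order of events
dat <- dat[order(dat$fileId, dat$date, dat$timeMs), ]
```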

## x,y-coordinates outside of display range

The display of the Multi-Touch Table is a 4K display with 3840 x 2160
pixels. When you plot the start and stop coordinates, the display area is
clearly distinguishable. However, a lot of points lie outside of the
display range. This can happen when the art objects are scaled and then
moved to the very edge of the table; then pixels outside of the table get
recorded. These are actually valid data points, and I will leave them as
is.

```{r}
par(mfrow = c(1, 2))
plot(y.start ~ x.start, dat2)
abline(v = c(0, 3840), h = c(0, 2160), col = "blue", lwd = 2)
plot(y.stop ~ x.stop, dat2)
abline(v = c(0, 3840), h = c(0, 2160), col = "blue", lwd = 2)

aggregate(cbind(x.start, x.stop, y.start, y.stop) ~ 1, dat2, mean)
```

## Pop-ups from glossar cannot be assigned to a specific artwork

All the information, pictures, and texts for the topics and pop-ups are
stored in
`/Logfiles/ContentEyevisit/eyevisit_cards_light/<artwork_number>`. Among
other things, each folder contains XML files with the information about any
technical terms that can be opened from the hypertexts on the topic cards.
Often this information is artwork-specific, and then the corresponding XML
file is in the folder for this artwork. Sometimes, however, more general
terms can be opened. In order to avoid multiple files containing the same
information, these were stored in a folder called `glossar` and get
accessed from there. The raw log files only contain the path to this
glossar entry and do not record from which artwork it was accessed. I tried
to assign these glossar entries to the correct artworks. The (very
heuristic) approach was this:

1. Create a lookup table with all XML file names (possible pop-ups) from
   the glossar folder and the artworks that possibly call them. This was
   stored as an `RData` object for easier handling but should maybe be
   stored in a more interoperable format.

2. I went through all possible pop-ups in this lookup table and stored the
   artworks that are associated with them.

3. I created a sub data frame without move events (since they can never be
   associated with a pop-up) and went through every line and looked up
   whether an artwork and a topic card had been opened. If this was the
   case and a glossar entry came up before the artwork was closed again, I
   assigned this artwork to this glossar entry.

This is heuristic since it is possible that several topic cards from
different artworks are opened simultaneously and the glossar pop-up could
have been opened from either one (it could even be more than two, of
course). In these cases, the artwork that was opened closest to the glossar
pop-up has been assigned, but this can never be completely error free.

And this heuristic only assigns a little more than half of the glossar
entries. Since my heuristic only looks for the last artwork that has been
opened and checks whether this artwork is a possible candidate, it misses
all glossar pop-ups where another artwork has been opened in between.
Writing a more elaborate algorithm is still an open TODO.

All glossar pop-ups that do not get matched with an artwork are removed
from the data set with a warning.
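
A rough sketch of what step 3 amounts to; `assign_glossar()` is a
hypothetical stand-in for the actual implementation, and `glossar_dict` is
assumed to map each glossar pop-up file to its candidate artworks:

```{r, eval = FALSE}
assign_glossar <- function(events, glossar_dict) {
  open_artworks <- character(0)
  for (i in seq_len(nrow(events))) {
    if (events$event[i] == "flipCard") {
      # Card flipped open: remember the artwork, most recent last
      # (closing of cards is ignored in this simplified sketch)
      open_artworks <- c(open_artworks, events$artwork[i])
    } else if (events$event[i] == "openPopup" &&
               events$artwork[i] == "glossar") {
      # Assign the most recently opened candidate artwork, if any
      candidates <- glossar_dict[[events$popup[i]]]
      hit <- rev(open_artworks)[rev(open_artworks) %in% candidates][1]
      if (!is.na(hit)) events$artwork[i] <- hit
    }
  }
  events
}
```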

## Assign a `case` variable based on "time heuristic"

One thing needed in order to work with the data set and use it for machine
learning algorithms like process mining is a variable that tries to
identify a case. A case variable structures the data frame in a way that
navigation behavior can actually be investigated. However, we do not know
whether several people are standing around the table interacting with it or
just one very active person. The simplest way to define a case variable is
to use a time limit between events. This means that when the table has not
been interacted with for, e.g., 20 seconds, it is assumed that a person
moved on and a new person started interacting with the table. This is the
easiest heuristic and the one implemented at the moment. Process mining
shows that this simple approach works, in the sense that the correct
process gets extracted by the algorithm.
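
A minimal sketch of this heuristic (assuming `dat2` is sorted
chronologically and `date.start` is a `POSIXct` variable):

```{r, eval = FALSE}
# Sketch: start a new case after more than 20 seconds without interaction
gap <- c(0, diff(as.numeric(dat2$date.start)))  # seconds between events
dat2$case <- cumsum(gap > 20) + 1
```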

In order to investigate user behavior on a more fine-grained level, it will
be necessary to come up with a more elaborate approach. A better, still
simple approach could be to use this kind of time limit and additionally
look at the distance between artworks interacted with within one time
window. When artworks are far apart, it seems plausible that more than one
person interacted with them. Very short time lapses between events on
different artworks could also be an indicator that more than one person is
interacting with the table.

## Assign a `trace` variable

The `trace` variable is supposed to capture one interaction trace with one
artwork. That is, it starts when an artwork is touched or flipped and stops
when it is closed again. It is easy to assign a trace from flipping a card,
over opening (maybe several) topics and pop-ups for this artwork card,
until closing this card again. But one would also like to assign the same
trace to move events surrounding this interaction. Again, this is not
possible in an algorithmic way but only heuristically. I used the `case`
variable in order to get meaningful units around the artworks.

If within one case only a single trace for a single artwork was opened, I
assigned this trace to the moves associated with this artwork. It (quite
often) happens that within one case one artwork is opened and closed
several times, each time starting a new trace. I then assigned all the
following move events to the preceding trace. This is, of course, arbitrary
and could also be handled the other way around.

Another possibility is that an artwork gets moved within one case without
being flipped. I then assigned a new trace to this move.

Overall this worked very well, even though it was based on the very
heuristic approach of assigning a case when the table has not been touched
for 20 seconds. It should be kept in mind that the trace assignments for
the moves will change when case is defined in a different way.
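
A minimal sketch of the carry-forward rule for the moves (`fill_trace()` is
a hypothetical helper; it fills the missing trace of a move with the last
trace seen for the same artwork within the same case):

```{r, eval = FALSE}
fill_trace <- function(trace) {
  for (i in seq_along(trace)[-1]) {
    if (is.na(trace[i])) trace[i] <- trace[i - 1]
  }
  trace
}
dat2$trace <- ave(dat2$trace, dat2$case, dat2$artwork, FUN = fill_trace)
```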

## A `move` event does not record any change

Most of the events in the log files are move events. Many of these move
events, however, do not indicate any change, meaning the only difference is
the time stamp. All other variables indicating moves, like `x.start` and
`x.stop`, `rotation.start` and `rotation.stop`, etc., do not show any
change. They represent about 2/3 of all move events. These events are
probably short touches of the table without an actual interaction. They
were therefore removed from the data set.
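
A sketch of this filter (the `scale.*` column names are an assumption
following the naming pattern above):

```{r, eval = FALSE}
# Sketch: drop move events where nothing but the time stamp changed
unchanged <- with(dat2, x.start == x.stop & y.start == y.stop &
                        scale.start == scale.stop &
                        rotation.start == rotation.stop)
unchanged[is.na(unchanged)] <- FALSE
dat2 <- dat2[!(dat2$event == "move" & unchanged), ]
```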

## Events that only close (`date.start` is NA)

It looks like there is some kind of log error for events that have a stop
but no start. I was able to get rid of most of them by sorting by `popup`
for the openPopup events, but there are still some left (50 for the small
data set, which corresponds to 0.2 per mille). The following example shows
that artwork "501" gets closed (line 31030) while the pop-up
`sommerbau.xml` is still open (line 31027). Then artwork "501" gets opened
again (line 31035), and after that the pop-up `sommerbau.xml` is closed
(line 31040). This should not be possible, and therefore (correctly) two
events are created: one where the pop-up was opened and then not closed
(which is common) and another one where the pop-up has no start.

```{r}
dat[31000:31019, ]
# Card gets flipped closed before pop-up closes --> log error!
```

I did not check all of these cases (for the complete data set this is
simply not possible by hand) but just excluded all events that do not have
a `date.start`, since they are hard to interpret. Often they are log
errors, but in some cases they might be resolvable.

```{r}
# Remove all events that do not have a `date.start`
dim(dat2[is.na(dat2$date.start), ])
dat2 <- dat2[!is.na(dat2$date.start), ]
```

## Card indices go from 0 to 7 (instead of 0 to 5 as expected)

See `questions_number-of-cards.R` for more details.

I wrote a function that, for each artwork, extracts the file names of the
possible topic cards and then looks up which topics have actually been
displayed on the back of the card. I added an index giving the ordering in
the index files.

The possible values in the variable `topicNumber` range from 0 to 7;
however, no artwork has more than six different numbers. So I renamed those
numbers from 1 to the highest number, e.g., $0,1,2,4,5,6$ was changed to
$0\to 1,1\to 2,2\to 3,4\to 4,5\to 5,6\to 6$. Next I used the index to
assign topics and file names to the corresponding pop-ups. This needs to be
cross-checked with the programming, but it seems the most plausible
approach with my current knowledge.
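
A compact sketch of this renaming (mapping the observed values of one
artwork onto consecutive ranks):

```{r, eval = FALSE}
# Sketch: rename topicNumber values per artwork, e.g., 0,1,2,4,5,6 -> 1..6
renumber <- function(x) match(x, sort(unique(x)))
renumber(c(0, 1, 2, 4, 5, 6))  # 1 2 3 4 5 6
dat2$topicNumber <- ave(dat2$topicNumber, dat2$artwork, FUN = renumber)
```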

## Extracting topics from `index.xml` vs. `<artwork_number>.xml`

When I extract the topics from `index.xml`, I get different topics than
when I get them from `<artwork_number>.xml`. At first glance, it looks like
using `index.xml` actually gives the wrong results.

```{r}
artworks <- unique(dat2$artwork)
path <- "data/haum/ContentEyevisit/eyevisit_cards_light/"
topics <- extract_topics(artworks, rep("index.xml", length(artworks)), path)
topics2 <- extract_topics(artworks, paste0(artworks, ".xml"), path)

topics[!topics$file_name %in% topics2$file_name, ]
topics2[!topics2$file_name %in% topics$file_name, ]
```

For artwork "031", `index.xml` only defines 5 cards (the 6th is commented
out), but `topicNumber` for this artwork has 6 different entries. I will
therefore extract the topics from `<artwork_number>.xml`. (This also seems
more compatible with other data sets like 8o8m.)

## New artworks "504" and "505" starting October 2022

When I read in the complete data frame for the first time, all of a sudden
there were 72 instead of 70 artworks. It seems like these two artworks
appear on October 21, 2022.

```{r}
dat0 <- read.table("data/haum/raw_logfiles_2023-09-23_01-31-30.csv",
                   sep = ";", header = TRUE)
dat0$date <- as.POSIXct(dat0$date)
dat0$glossar <- ifelse(dat0$artwork == "glossar", 1, 0)

# Remove irrelevant events
dat <- subset(dat0, !(dat0$event %in% c("Start Application",
                                        "Show Application")))

summary(dat[dat$artwork %in% c("504", "505"), ])
```

The artworks seem to have been updated in general after October 21, 2022.

```{r}
art_after_oct2022 <- sort(unique(dat[dat$date >= "2022-10-21", "artwork"]))
art_before_oct2022 <- sort(unique(dat[dat$date <= "2022-10-21", "artwork"]))
# Removed artworks
art_before_oct2022[!art_before_oct2022 %in% art_after_oct2022]
# Additional artworks
art_after_oct2022[!art_after_oct2022 %in% art_before_oct2022]
```

The following table shows which artworks were presented in which years.

```{r}
xtabs(~ artwork + lubridate::year(date), dat)
```

This strongly suggests that the artworks have been updated after the Corona
pandemic. I think the table was also moved to a different location at that
point. (Check with PG to make sure.)

I need to get the XML files for "504" and "505" from PM in order to extract
information on them for the metadata.


# Optimizing resources used by the code

After I started trying out the functions on the complete data set, it
became obvious (not surprisingly `:)`) that this will not work --
especially for the move events. The reshape function cannot take a long
data frame with over 6 million entries and convert it into a wide data
frame (at least not on my laptop). The code is supposed to work "out of the
box" for researchers, hence it *should* run on a regular (8 core) laptop.
So I changed the reshaping so that it is done in batches on subsets of the
data, for every `fileId` separately. This means that events that span two
raw log files cannot be closed and are then removed from the data set. The
function warns about this, but removing these data is a random process and
therefore does not seem like a systematic problem. Another reason why this
is not bad is that durations cannot be calculated for events across log
files anyway, because the time stamps do not increase systematically over
log files (see above).

I meant to put the lists back together with `do.call(rbind, some_list)`,
but this can also not handle big data sets. I therefore switched to
`dplyr::bind_rows(some_list)`, which is really fast and was developed
especially for this purpose. It means that I have to depend on the dplyr
package (which I am not a big fan of, since I meant to keep the package
self-contained).
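
A rough sketch of the batching idea; `reshape_batch()` is a hypothetical
stand-in for the per-file reshaping step:

```{r, eval = FALSE}
batches  <- split(dat1, dat1$fileId)    # one subset per raw log file
reshaped <- lapply(batches, reshape_batch)
dat_wide <- dplyr::bind_rows(reshaped)  # fast alternative to do.call(rbind, ...)
```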

# Reading list

* @Arizmendi2022 [--]
* @Bannert2014 [x]
* @Bousbia2010 [--]
* @Cerezo2020
* @GerjetsSchwan2021 [x]
* @Goldhammer2020
* @Guenther2007
* @HuberBannert2023 [x]
* @Kroehne2018
* @SchwanGerjets2021 [x]
* @vanderAalst2016 [Chap. 2, x]
* @vanderAalst2016 [Chap. 3]
* @vanderAalst2016 [Chap. 5, x]
* @Wang2019

# Open stuff

* Angle from which people approach the table in Braunschweig? Consider in
  the rotation variable?
* Time limit for `case` variable different for different events? (openTopic
  should be opened the longest)

$\to$ I think this is not relevant since I am looking at time *between*
events!

# Stuff AK found interesting

* Pre/post corona
* Identify school classes
* How many persons are present at the table?

# Other potential questions

* "Bursts"
* 1st vs. 2nd half of the day
* Can we identify "types of art"? With clustering or something?
* Possible to estimate how many persons per day? Maybe average of certain
  weekdays? ... ?