---
title: "Background information about MTT data"
author: "Nora Wickelmaier"
date: "`r Sys.Date()`"
output:
  html_document:
    number_sections: true
    toc: true
---

```{r, include = FALSE}
# setwd("C:/Users/nwickelmaier/Nextcloud/Documents/MDS/2023ss/60100_master_thesis")
devtools::load_all("../../../software/mtt")
```

# Log data from the Multi-Touch Table at the HAUM

The Multi-Touch Table at the Herzog-Anton-Ulrich-Museum (HAUM) in
Braunschweig gives visitors of the museum the opportunity to interact with
67 artworks and 3 tiles containing information about the museum and its
layout. The table was installed at the institute in October 2016, and since
November 2016 log files of the interactions of museum visitors have been
collected. These log files are in an unstructured format and cannot be
easily analyzed. The purpose of this document is to describe how the data
have been transformed and which decisions have been made along the way.

# Data structure

The log files contain lines that indicate the beginning and end of possible
actions that can be performed when interacting with the artworks on the
table. The layout of the table looks as if 70 pictures had been tossed onto
a large table. Every artwork is visible in the start configuration. People
can move the pictures on the table, and they can be scaled and rotated.
Additionally, the virtual picture cards can be flipped in order to find
more information about the artwork on the "back" of the card. One has to
press a little `i` in one of the bottom corners of the card to get this
information. On the back of the card two (?) to six information cards can
be found, each with a teaser text about a certain topic. These topic cards
can be opened and a hypertext with detailed information pops up. Within
these hypertexts certain technical terms can be clicked so that lay people
get more information. This again opens a pop-up. The events encoded in the
raw log files therefore have the following structure.

```
"Start Application"   --> Start Application
"Show Application"
"Transform start"     --> Move
"Transform stop"
"Show Info"           --> Flip Card
"Show Front"
"Artwork/OpenCard"    --> Open Topic
"Artwork/CloseCard"
"ShowPopup"           --> Open Popup
"HidePopup"
```

The right side shows which events can be extracted from these raw lines.
"Start Application" is not an event in the original sense since it only
indicates that the table was started or maybe reset itself. This is not an
interaction with the table and therefore not interesting in itself. All
"Start Application" and "Show Application" lines are therefore excluded
from the data during further processing and are only kept in the raw log
files.

# Parsing the raw log files

The first step is to parse the raw log files, which are stored by the
application as text files in a rather unstructured format, into a format
that can be read by common statistics software packages. The data are
therefore transferred to a spreadsheet format. The following sections
describe the problems that were encountered while doing this.

## Corrupt lines

When reading the files containing the raw logs into R, a warning appears
that says

```
Warning messages:
incomplete final line found on '2016/2016_11_18-11_31_0.log'
incomplete final line found on '2016/2016_11_18-11_38_30.log'
incomplete final line found on '2016/2016_11_18-11_40_36.log'
...
```

When you open these files, it looks like the last line contains some binary
content. It is unclear why and how this happens. So when reading the data,
these lines were removed. A warning will be given that indicates how many
files have been affected.
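
The following is a minimal sketch of how such corrupt final lines could be
filtered out when reading the raw files; the function name and the
filtering rule (dropping a final line with non-printable characters) are
illustrative and not the actual package implementation.

```{r, eval = FALSE}
# Sketch: read one raw log file and drop a corrupt (binary) final line
read_log_clean <- function(file) {
  lines <- suppressWarnings(readLines(file))  # silences "incomplete final line"
  if (length(lines) > 0 && grepl("[^[:print:]\t]", lines[length(lines)])) {
    warning("Removed corrupt final line in ", file)
    lines <- lines[-length(lines)]
  }
  lines
}

logs <- list.files("2016", pattern = "\\.log$", full.names = TRUE)
raw_lines <- unlist(lapply(logs, read_log_clean))
```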

## Extracted variables from raw log files

The following variables (columns in the data frame) are extracted from the
raw log files:

* `fileId`: Containing the zero-left-padded file name of the raw log file
  the data line has been extracted from.

* `folder`: The folder name in which the raw log files have been
  organized. For the HAUM data set, the data are sorted by year (folders
  2016, 2017, 2018, 2019, 2020, 2021, 2022, and 2023).

* `date`: Extracted time stamp from the raw log file in the format
  `yyyy-mm-dd hh:mm:ss`.

* `timeMs`: Containing a time stamp in milliseconds that restarts with
  every new raw log file.

* `event`: Start and stop event tags. See above for possible values.

* `artwork`: Identifier of the different artworks. This is a 3-digit
  (left-padded) number. The numbers of the artworks correspond to the
  folder names in `/ContentEyevisit/eyevisit_cards_light/` and were
  originally taken from the museum's catalogue.

* `popup`: Name of the pop-up opened. This is only relevant for
  "openPopup" events.

* `topicNumber`: The number of the topic card that has been opened on the
  back of the artwork card. See below for a more detailed description of
  what these numbers possibly mean.

* `x`: Value of the x-coordinate in pixels on the 4K display
  ($3840 \times 2160$).

* `y`: Value of the y-coordinate in pixels.

* `scale`: Number in 128 bit that indicates how much the artwork card has
  been scaled (????)

* `rotation`: Degree of rotation in the start configuration.

<!-- TODO: After what time interval does the table reset itself to the
start configuration? -> PM needs to look it up -->

## Variables after "closing of events"

The raw log data consist of start and stop events for each event type.
After preprocessing, four event types are extracted: `move`, `flipCard`,
`openTopic`, and `openPopup`. Except for the `move` events, which can occur
at any time when interacting with an artwork card on the table, the events
have a hierarchical order: An artwork card first needs to be flipped
(`flipCard`), then the topic cards on the back of the card can be opened
(`openTopic`), and finally pop-ups on these topic cards can be opened
(`openPopup`). This implies that the event `openPopup` can only be present
for a certain artwork if the card has already been flipped (i.e., an event
`flipCard` for the same artwork has already occurred).

After preprocessing, the data frame is in a wide format with columns for
the start and the stop of each event and contains the following variables:

* `folder`: Containing the folder name (see above).

* `eventId`: A numerical variable that indicates the number of the event.
  Starts at 1 and ends with the total number of events, counting up by 1.

* `case`: A numerical variable indicating cases in the data. A "case"
  indicates an interaction interval and could be defined in different ways.
  Right now, a new case begins when no event has occurred for 20 seconds.

* `trace`: A trace is defined as one interaction with one artwork. A trace
  can either start with a `flipCard` event or when an artwork has been
  touched for the first time within this case. A trace ends with the
  artwork card being flipped closed again or with the last movement of the
  card within this case. One case can contain several traces with the same
  artwork when the artwork is flipped open and closed again several times
  within a short time.

* `glossar`: An indicator variable with values 0/1 that tracks if a pop-up
  has been opened from the glossar folder. These pop-ups can be assigned to
  the wrong artwork since it is not possible to do this algorithmically.
  It is possible that two artworks are flipped open that could both link to
  the same pop-up from the glossar. The indicator variable is left as a
  variable, so that these pop-ups can be easily deleted from the data.
  Right now, glossar entries can be ignored completely by setting an
  argument, and this is done by default. Using the pop-ups from the glossar
  will need a lot more love before it behaves satisfactorily.

* `event`: Indicating the event. Can take the values `move`, `flipCard`,
  `openTopic`, and `openPopup`.

* `artwork`: Identifier of the different artworks. This is a 3-digit
  (left-padded) number. See above.

* `fileId.start` / `fileId.stop`: See above.

* `date.start` / `date.stop`: See above.

* `timeMs.start` / `timeMs.stop`: See above.

* `duration`: Calculated as $timeMs.stop - timeMs.start$ in milliseconds.
  Needs to be adjusted for events spanning more than one log file by adding
  $600,000$ ms for every additional log file. See below for details.

* `topicNumber`: See above.

* `popup`: See above.

* `x.start` / `x.stop`: See above.

* `y.start` / `y.stop`: See above.

* `distance`: Euclidean distance calculated from $(x.start, y.start)$ and
  $(x.stop, y.stop)$.

* `scale.start` / `scale.stop`: See above.

* `scaleSize`: Relative scaling of the artwork card, calculated by
  $\frac{scale.stop}{scale.start}$.

* `rotation.start` / `rotation.stop`: See above.

* `rotationDegree`: Difference in rotation between $rotation.stop$ and
  $rotation.start$.

## How unclosed events are handled

Events do not necessarily need to be completed. A person can, e.g., leave
the table and not flip the artwork card closed again. For `flipCard`,
`openTopic`, and `openPopup` the data frame contains `NA` when the event
does not complete. For `move` events it happens quite often that a start
event follows a start event and a stop event follows a stop event.
Technically, a move event cannot *not* be finished, and the number of
events without a start or stop indicates that the time resolution was not
sufficient to catch all these events accurately. Double start and stop
`move` events have therefore been deleted from the data set.

<!--
## How a case is defined

* Find out whether more than one person is standing at the table?
  - A sliding window in which the number of artworks is counted? Or how far
    apart the touched artworks are from each other?
  - One can already "see" this in the logs - but how can I extract it
    automatically? What is my definition of an "interaction boost"?
  - No matter how we do it, does it work on the "event log data"?
-->

## Additional metadata

For the HAUM data, I added metadata on state holidays and school
vacations. Additionally, the topic categories of the topic cards were
extracted from the XML files and added to the data frame.

This led to the following additional variables:

* `topicIndex`

* `topicFile`

* `topic`

* `state` (Niedersachsen for the complete HAUM data set)

* `stateCode` (NI)

* `holiday`

* `vacations`

* `stateCodeVacations`

<!--
- Metadata on artworks like name, artist, type of artwork, epoch, etc.
- School vacations and holidays
- Special exhibits at the museum
- Number of visitors per day (follow up with Sven again?)
- Age structure of visitors per day?
- ... ????
-->

# Problems and how I handled them

This section lists some problems with the log data that required decisions.
These decisions influence the outcome and maybe even the data quality.
Hence, I tried to document how I handled these problems and explain the
decisions I made.

## Weird behavior of `timeMs` and negative `duration` values

`timeMs` resets itself every time a new log file starts. This means that
the durations of events spanning more than one log file must be adjusted.
Instead of just calculating $timeMs.stop - timeMs.start$, `timeMs.start`
must be subtracted from the maximum duration of the log file where the
event started ($600,000$ ms) and `timeMs.stop` must be added. If the event
spans more than two log files, a multiple of $600,000$ must be taken, e.g.,
for three log files it must be $2 \times 600,000 - timeMs.start +
timeMs.stop$, and so on.
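
The following is a minimal sketch of this adjustment (the helper function
is illustrative and not part of the package; `max_ms` is the 10-minute
logging interval of $600,000$ ms):

```{r, eval = FALSE}
# Sketch: adjusted duration for an event spanning n_files log files
adjust_duration <- function(timeMs.start, timeMs.stop, n_files,
                            max_ms = 600000) {
  ifelse(n_files == 1,
         timeMs.stop - timeMs.start,                        # same log file
         (n_files - 1) * max_ms - timeMs.start + timeMs.stop)
}

adjust_duration(599671, 850, n_files = 2)  # event spanning two log files
```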

```{r, results = FALSE, fig.show = TRUE}
# Read data
dat0 <- read.table("data/haum/raw_logfiles_small_2023-09-26_13-50-20.csv", sep = ";",
                   header = TRUE)
dat0$date <- as.POSIXct(dat0$date)
dat0$glossar <- ifelse(dat0$artwork == "glossar", 1, 0)

# Remove irrelevant events
dat <- subset(dat0, !(dat0$event %in% c("Start Application",
                                        "Show Application")))

# Add trace variable
artworks <- unique(stats::na.omit(dat$artwork))
artworks <- artworks[artworks != "glossar"]
glossar_files <- unique(subset(dat, dat$artwork == "glossar")$popup)
glossar_dict <- create_glossardict(artworks, glossar_files,
  xmlpath = "data/haum/ContentEyevisit/eyevisit_cards_light/")
dat1 <- add_trace(dat, glossar_dict)

# Close events
dat2 <- rbind(close_events(dat1, "move", rm_nochange_moves = TRUE),
              close_events(dat1, "flipCard", rm_nochange_moves = TRUE),
              close_events(dat1, "openTopic", rm_nochange_moves = TRUE),
              close_events(dat1, "openPopup", rm_nochange_moves = TRUE))
dat2 <- dat2[order(dat2$fileId.start, dat2$date.start, dat2$timeMs.start), ]

plot(timeMs ~ as.factor(fileId), dat[1:5000,], xlab = "fileId")
```

The boxplot shows that we have a continuous range of values within one log
file but that `timeMs` does not increase over log files. I kept
`timeMs.start` and `timeMs.stop` and also `fileId.start` and `fileId.stop`
in the data frame, so it is clear when events span more than one log file.

<!--
Info from Philipp:

"I also just went through the code from back then. The logging works like
this: When the application starts, a new log file is created every 10
minutes. The start time from which the duration is calculated is reset each
time. So the duration is not 'time since the application was started' but
'time since the logger was restarted'. Your assumption is therefore correct
- there should be no durations > 10 minutes. The first entry of a log file
can be anything between 0 and 10 minutes (depending on whether the table
was in use at the time of the new logging interval). So if a case is spread
over 2+ log files, you have to add 10 minutes to the duration for every log
file after the first one for it to add up."
-->

## Left padding of file IDs

The file names of the raw log files are automatically generated and contain
a time stamp. This time stamp is not well formed. First, it contains an
incorrect month. The months go from 0 to 11, which means that the file name
`2016_11_15-12_12_57.log` was collected on December 15, 2016 at 12:12 pm.
Another problem is that the file names are not zero-left-padded, e.g.,
`2016_11_15-12_2_57.log`. This file was collected on December 15, 2016 at
12:02 pm and therefore before the file above. But most sorting algorithms
will sort these files in the order shown below. In order to preprocess the
data and close events that belong together, the data need to be sorted by
events and artworks repeatedly. In order to get them back into the correct
time order, it is necessary to order them based on three variables:
`fileId`, `date.start`, and `timeMs`. The file IDs therefore need to sort
in the correct order (again, see below for an example). I zero-left-padded
the log file names within the data frame and use them as identifiers. These
"file names" do not correspond exactly to the original raw log file names.
This needs to be kept in mind when doing any kind of matching etc.

```
## what it looked like before left padding
# 1422 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_2_57.log 2016-12-15 12:12:56 599671 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26874254
# 1423 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 621 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26523465
# 1424 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 677 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997736 13.26239605
# 1425 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 774 Transform start 076 076.xml NA 2092.25 2008.00 0.2999345 13.26239605
# 1426 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_12_57.log 2016-12-15 12:12:57 850 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997107 13.26223362
# 1427 ../data/haum_logs_2016-2023/_2016b/2016_11_15-12_2_57.log 2016-12-15 12:12:57 599916 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997771 13.26523465

## what it looks like now
# 1422 2016_11_15-12_02_57.log 2016-12-15 12:12:56 599671 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26874254
# 1423 2016_11_15-12_02_57.log 2016-12-15 12:12:57 599916 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997771 13.26523465
# 1424 2016_11_15-12_12_57.log 2016-12-15 12:12:57 621 Transform start 076 076.xml NA 2092.25 2008.00 0.3000000 13.26523465
# 1425 2016_11_15-12_12_57.log 2016-12-15 12:12:57 677 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997736 13.26239605
# 1426 2016_11_15-12_12_57.log 2016-12-15 12:12:57 774 Transform start 076 076.xml NA 2092.25 2008.00 0.2999345 13.26239605
# 1427 2016_11_15-12_12_57.log 2016-12-15 12:12:57 850 Transform stop 076 076.xml NA 2092.25 2008.00 0.2997107 13.26223362
```
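
A minimal sketch of how such a padded file ID could be built from a raw
file name (the function name and regular expression are illustrative; the
actual package implementation may differ, and the zero-based month is left
as logged):

```{r, eval = FALSE}
# Sketch: zero-left-pad the time-stamp components of a raw log file name
pad_file_id <- function(path) {
  name  <- sub("\\.log$", "", basename(path))  # "2016_11_15-12_2_57"
  parts <- as.integer(strsplit(name, "[_-]")[[1]])
  sprintf("%d_%02d_%02d-%02d_%02d_%02d.log",
          parts[1], parts[2], parts[3], parts[4], parts[5], parts[6])
}

pad_file_id("../data/haum_logs_2016-2023/_2016b/2016_11_15-12_2_57.log")
# "2016_11_15-12_02_57.log"
```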

## Timestamps repeat

The time stamps in the `date` variable record year, month, day, hour,
minute, and second. Since one second is not a very short time interval for
a move on a touch display, this is not fine-grained enough to bring events
into the correct order, meaning there are events from the same log file
having the same time stamp and even events from different log files having
the same time stamp. The log files get written about every 10 minutes
(which can easily be seen when looking at the file names of the raw log
files). So in order to get events into the correct order, it is necessary
to first order by file ID, within file ID then sort by the time stamp
`date`, and then within these more coarse-grained time stamps sort by
`timeMs`. But as explained above, `timeMs` can only be sorted within one
file ID, since it does not increase consistently over log files but starts
with a new offset for each raw log file.
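
For the raw data this ordering can be expressed directly with `order()` (a
minimal sketch using the column names described above):

```{r, eval = FALSE}
# Sort events hierarchically: file ID first, then the coarse date,
# then the millisecond counter, which is only comparable within one file
dat <- dat[order(dat$fileId, dat$date, dat$timeMs), ]
```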

## x,y-coordinates outside of display range

The display of the Multi-Touch Table is a 4K display with 3840 x 2160
pixels. When you plot the start and stop coordinates, the display is
clearly distinguishable. However, a lot of points are outside of the
display range. This can happen when the art objects are scaled and then
moved to the very edge of the table; then pixels outside of the table get
recorded. These are actually valid data points and I will leave them as
they are.

```{r}
par(mfrow = c(1, 2))
plot(y.start ~ x.start, dat2)
abline(v = c(0, 3840), h = c(0, 2160), col = "blue", lwd = 2)
plot(y.stop ~ x.stop, dat2)
abline(v = c(0, 3840), h = c(0, 2160), col = "blue", lwd = 2)

aggregate(cbind(x.start, x.stop, y.start, y.stop) ~ 1, dat2, mean)
```

## Pop-ups from glossar cannot be assigned to a specific artwork

All the information, pictures, and texts for the topics and pop-ups are
stored in
`/Logfiles/ContentEyevisit/eyevisit_cards_light/<artwork_number>`. Among
other things, each folder contains XML files with the information about any
technical terms that can be opened from the hypertexts on the topic cards.
Often this information is artwork-dependent and then the corresponding XML
file is in the folder for this artwork. Sometimes, however, more general
terms can be opened. In order to avoid multiple files containing the same
information, these were stored in a folder called `glossar` and get
accessed from there. The raw log files only contain the path to this
glossar entry and do not record from which artwork it was accessed. I tried
to assign these glossar entries to the correct artworks. The (very
heuristic) approach was this:

1. Create a lookup table with all XML file names (possible pop-ups) from
   the glossar folder and the artworks that possibly call them. This was
   stored as an `RData` object for easier handling but should maybe be
   stored in a more interoperable format.

2. I went through all possible pop-ups in this lookup table and stored the
   artworks that are associated with them.

3. I created a sub data frame without move events (since they can never be
   associated with a pop-up) and went through every line and looked up if
   an artwork and a topic card had been opened. If this was the case and a
   glossar entry came up before the artwork was closed again, I assigned
   this artwork to this glossar entry.

This is heuristic since it is possible that several topic cards from
different artworks are opened simultaneously and the glossar pop-up could
have been opened from either one (it could even be more than two, of
course). In these cases the artwork that was opened closest to the glossar
pop-up has been assigned, but this can never be completely error free.
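
A condensed sketch of this assignment step is shown below. It assumes that
the lookup table is a named list mapping each glossar file name to its
candidate artworks, and it is deliberately simplified compared to the
actual package code:

```{r, eval = FALSE}
# Sketch: assign the most recently opened artwork to a glossar pop-up if
# that artwork is a possible caller of the pop-up (otherwise leave NA)
assign_glossar_artwork <- function(events, glossar_dict) {
  events$glossar_artwork <- NA_character_
  last_artwork <- NA_character_
  for (i in seq_len(nrow(events))) {
    if (!is.na(events$artwork[i]) && events$artwork[i] != "glossar") {
      last_artwork <- events$artwork[i]
    } else if (events$event[i] == "openPopup" &&
               last_artwork %in% glossar_dict[[events$popup[i]]]) {
      events$glossar_artwork[i] <- last_artwork
    }
  }
  events
}
```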

Moreover, this heuristic only assigns a little more than half of the
glossar entries. Since it only looks at the last artwork that has been
opened and checks whether this artwork is a possible candidate, it misses
all glossar pop-ups where another artwork has been opened in between.
Writing a more elaborate algorithm is still an open TODO.

All glossar pop-ups that do not get matched with an artwork are removed
from the data set with a warning if the argument `glossar = TRUE` is set.
Otherwise the glossar entries are ignored completely.

## Assign a `case` variable based on a "time heuristic"

One thing needed in order to work with the data set and use it for machine
learning algorithms like process mining is a variable that tries to
identify a case. A case variable structures the data frame in a way that
navigation behavior can actually be investigated. However, we do not know
if several people are standing around the table interacting with it or just
one very active person. The simplest way to define a case variable is to
just use a time limit between events. This means that when the table has
not been interacted with for, e.g., 20 seconds, it is assumed that a person
moved on and a new person started interacting with the table. This is the
easiest heuristic and the one implemented at the moment. Process mining
shows that this simple approach works in the sense that the correct process
gets extracted by the algorithm.
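
A minimal sketch of this time heuristic, assuming the data are already in
chronological order (the helper function is illustrative; the cutoff of 20
seconds is the current default):

```{r, eval = FALSE}
# Sketch: start a new case whenever the gap between consecutive events
# exceeds the cutoff (in seconds)
assign_case <- function(dat, cutoff = 20) {
  gap <- c(0, diff(as.numeric(dat$date)))  # seconds between consecutive events
  cumsum(gap > cutoff) + 1
}

dat$case <- assign_case(dat)
```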

In order to investigate user behavior on a more fine-grained level, it will
be necessary to come up with a more elaborate approach. A better, yet still
simple, approach could be to use this kind of time limit and additionally
look at the distance between artworks interacted with within one time
window. When artworks are far apart, it seems plausible that more than one
person interacted with them. Very short time lapses between events on
different artworks could also be an indicator that more than one person is
interacting with the table.

## Assign a `trace` variable

The `trace` variable is supposed to capture one interaction trace with one
artwork, meaning it starts when an artwork is touched or flipped and stops
when it is closed again. It is easy to assign a trace from flipping a card
over opening (maybe several) topics and pop-ups for this artwork card until
closing this card again. But one would also like to assign the same trace
to move events surrounding this interaction. Again, this is not possible in
an algorithmic way but only heuristically. I used the `case` variable in
order to get meaningful units around the artworks.

If within one case only a single trace for a single artwork was opened, I
assigned this trace to the moves associated with this artwork. It (quite
often) happens that within one case one artwork is opened and closed
several times, each time starting a new trace. I then assigned all the
following move events to the trace beforehand. This is, of course,
arbitrary and could also be handled the other way around.

Another possibility is that an artwork gets moved without being flipped at
all. I then assigned a new trace to this move.

Overall this worked very well, even though it was based on the very
heuristic approach of assigning a new case when the table has not been
touched for 20 seconds. It should be kept in mind that the trace
assignments for the moves will change when case is defined in a different
way.

## A `move` event does not record any change

Most of the events in the log files are move events. Additionally, many of
these move events are recorded but do not indicate any change, meaning that
the only difference is the time stamp. All other variables describing the
move, like `x.start` and `x.stop`, `rotation.start` and `rotation.stop`,
etc., do not show any change. These events represent about 2/3 of all move
events. They are probably short touches of the table without an actual
interaction and were therefore removed from the data set.
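
This is presumably what the `rm_nochange_moves = TRUE` argument of
`close_events()` takes care of; the following sketch only illustrates the
idea on the closed (wide) data, and the set of compared columns may differ
from the actual implementation:

```{r, eval = FALSE}
# Sketch: drop move events where nothing changed except the time stamp
no_change <- with(dat2, event == "move" &
                    x.start == x.stop & y.start == y.stop &
                    scale.start == scale.stop &
                    rotation.start == rotation.stop)
dat2 <- dat2[!(no_change %in% TRUE), ]  # keep rows that changed or contain NA
```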

## Events that only close (`date.start` is NA)

It looks like there is some kind of logging error for events that have a
stop but no start. I was able to get rid of most of them by additionally
sorting by `popup` for the `openPopup` events, but there are still some
left (50 for the small data set, which corresponds to 0.2 per mill). The
following example shows that artwork "501" gets closed (line 31030) while
the pop-up `sommerbau.xml` is still open (line 31027). Then artwork "501"
gets opened again (line 31035) and after that the pop-up `sommerbau.xml` is
closed (line 31040). This should not be possible and therefore (correctly)
two events are assigned: one where the pop-up was opened and then not
closed (which is common) and another one where the pop-up has no start.

```{r}
dat[31000:31019,]
# Card gets flipped closed before pop-up closes --> log error!
```

I did not check all of these cases (for the complete data set this is
simply not possible by hand) but just excluded all events that do not have
a `date.start` since they are hard to interpret. Often they are log errors
but in some cases they might be resolvable.

```{r}
# Remove all events that do not have a `date.start`
dim(dat2[is.na(dat2$date.start), ])
dat2 <- dat2[!is.na(dat2$date.start), ]
```

In order to deal with these logging errors, I check the data for what I
call "fragmented traces". These are traces that cannot happen when
everything is logged correctly, e.g., traces containing `flipCard ->
openPopup` or traces that only consist of `move`, `openTopic`, and
`openPopup` events. These fragmented traces are removed from the data. It
was not possible to check them all manually, but the 20 or more that I did
check in the raw log files were all some kind of logging error like the one
above. Most often a card was already closed again before a topic card or
pop-up was recorded as being closed.
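
A sketch of one of these checks (only the pattern "topic or pop-up events
without any `flipCard`"; the checks in the package are more comprehensive):

```{r, eval = FALSE}
# Sketch: flag traces that open topics or pop-ups without a flipCard event
fragmented <- tapply(dat2$event, dat2$trace,
                     function(x) !"flipCard" %in% x &&
                                 any(x %in% c("openTopic", "openPopup")))
dat2 <- dat2[!dat2$trace %in% names(fragmented)[fragmented], ]
```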

## Card indices go from 0 to 7 (instead of 0 to 5 as expected)

See `questions_number-of-cards.R` for more details.

I wrote a function that, for each artwork, extracts the file names of the
possible topic cards and then looks up which topics have actually been
displayed on the back of the card. I added an index giving the ordering in
the index files.

The possible values in the variable `topicNumber` range from 0 to 7;
however, no artwork has more than six different numbers. So I just
renumbered those values from 1 to the highest number, e.g., $0,1,2,4,5,6$
was changed to $0\to 1, 1\to 2, 2\to 3, 4\to 4, 5\to 5, 6\to 6$. Next I
used the index to assign topics and file names to the corresponding
pop-ups. This needs to be cross-checked with the programming, but it seems
the most plausible approach with my current knowledge.
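
This rank-based renumbering can be sketched per artwork with base R (the
actual implementation in the package may differ):

```{r, eval = FALSE}
# Sketch: map the observed topic numbers of each artwork to consecutive
# ranks 1, 2, ..., k (e.g., 0,1,2,4,5,6 becomes 1,2,3,4,5,6)
dat2$topicNumber <- ave(dat2$topicNumber, dat2$artwork,
                        FUN = function(x) match(x, sort(unique(x))))
```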

<!-- TODO: Ask Philipp -->

## Extracting topics from `index.xml` vs. `<artwork_number>.xml`

When I extract the topics from `index.xml`, I get different topics than
when I get them from `<artwork>.xml`. At first glance, it looks like using
`index.xml` actually gives the wrong results.

```{r}
artworks <- unique(dat2$artwork)
path <- "data/haum/ContentEyevisit/eyevisit_cards_light/"
topics <- extract_topics(artworks, rep("index.xml", length(artworks)), path)
topics2 <- extract_topics(artworks, paste0(artworks, ".xml"), path)

topics[!topics$file_name %in% topics2$file_name, ]
topics2[!topics2$file_name %in% topics$file_name, ]
```

For artwork "031", `index.xml` only defines 5 cards (the 6th is commented
out), but `topicNumber` for this artwork has 6 different entries. I will
therefore extract the topics from `<artwork>.xml`. (This also seems better
compatible with other data sets like 8o8m.)

## New artworks "504" and "505" starting October 2022

When I read in the complete data frame for the first time, all of a sudden
there were 72 instead of 70 artworks. It seems like these two artworks
appear on October 21, 2022.

```{r}
dat0 <- read.table("data/haum/raw_logfiles_2023-09-23_01-31-30.csv",
                   sep = ";", header = TRUE)
dat0$date <- as.POSIXct(dat0$date)
dat0$glossar <- ifelse(dat0$artwork == "glossar", 1, 0)

# Remove irrelevant events
dat <- subset(dat0, !(dat0$event %in% c("Start Application",
                                        "Show Application")))

summary(dat[dat$artwork %in% c("504", "505"), ])
```

The artworks seem to have been updated in general after October 21, 2022.

```{r}
art_after_oct2022 <- sort(unique(dat[dat$date >= "2022-10-21", "artwork"]))
art_before_oct2022 <- sort(unique(dat[dat$date <= "2022-10-21", "artwork"]))
# Removed artworks
art_before_oct2022[!art_before_oct2022 %in% art_after_oct2022]
# Additional artworks
art_after_oct2022[!art_after_oct2022 %in% art_before_oct2022]
```

The following table shows which artworks were presented in which years.

```{r}
xtabs(~ artwork + lubridate::year(date), dat)
```

It strongly suggests that the artworks have been updated after the Corona
pandemic. I think the table was also moved to a different location at that
point. (Check with PG to make sure.)

# Optimizing resources used by the code

After I started trying out the functions on the complete data set, it
became obvious (not surprisingly `:)`) that this would not work --
especially for the move events. The reshape function cannot take a long
data frame with over 6 million entries and convert it into a wide data
frame (at least not on my laptop). The code is supposed to work "out of the
box" for researchers, hence it *should* run on a regular (8 core) laptop.
So I changed the reshaping so that it is done in batches on subsets of the
data, for every `fileId` separately. This means that events that span two
(or more) raw log files cannot be closed and will then be removed from the
data set. The function warns about this, but it is a random process getting
rid of these data and therefore does not seem like a systematic problem.
Another reason why this is not bad is that durations cannot be calculated
for events across log files anyway, because the time stamps do not increase
systematically over log files (see above).

UPDATE: By now, I close the events spanning more than one log file after
this has been done.

I meant to put the lists back together with `do.call(rbind, some_list)`,
but this also cannot handle big data sets. I therefore switched to
`dplyr::bind_rows(some_list)`, which is really fast and was developed
especially for this purpose. It means that I have to depend on the dplyr
package (which I am not a big fan of, since I meant to keep the package
self-contained).
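
The batching idea is sketched below; `reshape_file()` is a hypothetical
placeholder for the per-file reshaping that actually happens inside
`close_events()`:

```{r, eval = FALSE}
# Sketch: reshape every raw log file separately and combine the results
batches <- split(dat1, dat1$fileId)
closed <- lapply(batches, function(d) {
  reshape_file(d)  # hypothetical stand-in for the long-to-wide reshaping
})
dat2 <- dplyr::bind_rows(closed)
```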

# Reading list

* @Arizmendi2022 [--]
* @Bannert2014 [x]
* @Bousbia2010 [--]
* @Cerezo2020
* @GerjetsSchwan2021 [x]
* @Goldhammer2020
* @Guenther2007
* @HuberBannert2023 [x]
* @Kroehne2018
* @SchwanGerjets2021 [x]
* @vanderAalst2016 [Chap. 2, x]
* @vanderAalst2016 [Chap. 3]
* @vanderAalst2016 [Chap. 5, x]
* @Wang2019

# Open stuff

* Angle from which people approach table in Braunschweig? Consider in
  rotation variable?
* Time limit for `case` variable different for different events? (openTopic
  should be opened the longest)

$\to$ I think this is not relevant since I am looking at time *between*
events!

# Stuff AK found interesting

* Pre/post corona
* Identify school classes
* How many persons are present at the table?

# Other potential questions

* "Bursts"
* 1st vs. 2nd half of the day
* Can we identify "types of art"? With clustering or something?
* Possible to estimate how many persons per day? Maybe average of certain
  weekdays? ... ?
|