Submission scientific data

This commit is contained in:
commit b53e63f57e

10 .gitignore vendored Normal file
@@ -0,0 +1,10 @@
.idea/
venv/
.venv/

data/

settings.yaml
.sqlite

.log
22 .pre-commit-config.yaml Normal file
@@ -0,0 +1,22 @@
repos:
  - repo: https://github.com/psf/black
    rev: stable
    hooks:
      - id: black
  - repo: https://github.com/charliermarsh/ruff-pre-commit
    rev: v0.1.15
    hooks:
      - id: ruff
  - repo: https://github.com/pre-commit/mirrors-mypy
    rev: v1.8.0
    hooks:
      - id: mypy
        additional_dependencies: [types-PyYAML]
  - repo: local
    hooks:
      - id: run-pytest
        name: Run pytest
        entry: pytest tests/ -v
        language: system
        pass_filenames: false
        always_run: true
130 CONTRIBUTING.md Normal file
@@ -0,0 +1,130 @@
# Contributing to preprocessing

Thank you for considering contributing to `preprocessing`!
Your input helps make this package robust, reliable, and extensible for the community.

Please follow these guidelines to ensure a smooth and constructive contribution process.

## How to contribute

### 1. Clone your fork

Clone the repository from your own account to your local machine:

```bash
git clone https://gitea.iwm-tuebingen.de/AG4/preprocessing.git
cd preprocessing
```

### 2. Create a feature branch

Create a new branch for the changes you intend to make. This keeps your modifications separate from the `main` branch.

```bash
git checkout -b feature/your-feature-name
```

Please ensure that your branch name is descriptive of the changes you are making, such as `feature/calculation-of-subscales` or `bugfix/fix-data-validation`.

### 3. Make your changes

- Implement your changes in the appropriate files inside `src/`.
- Update or create tests under `tests/` as needed.
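Tests follow the usual `pytest` conventions: plain functions whose names start with `test_`, using bare `assert` statements. As a minimal sketch (the file name and the helper being tested here are hypothetical, not part of the package):

```python
# tests/test_example.py -- hypothetical test module for illustration.

def normalize_scores(scores):
    # Hypothetical helper standing in for a function under src/:
    # linearly rescales a list of numbers into the range [0, 1].
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) for s in scores]


def test_normalize_scores_bounds():
    result = normalize_scores([2, 4, 6])
    assert result[0] == 0.0
    assert result[-1] == 1.0


def test_normalize_scores_midpoint():
    assert normalize_scores([2, 4, 6])[1] == 0.5
```

`pytest` discovers and runs such functions automatically when you run it from the repository root.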
### 4. Format and lint your code

Ensure that your code is correctly formatted and adheres to project style guidelines.
This helps maintain code consistency across the repository.

Note that `black` (formatting), `ruff` (linting), and the `pytest` tests should all run via a pre-commit hook.
To enable this, you need to set up `pre-commit` in your local repository
(it should already be installed via the dev dependencies).

To set up `pre-commit`, run the following command in your terminal:

```bash
pre-commit install
```

If you need to run it on existing code, you can do so by running

```bash
pre-commit run --all-files
```

This will automatically format and lint your code according to the project's standards.

For manual checks, format with `black` and lint with `ruff`:

```bash
black .
ruff check
ruff check --fix
ruff format
```

These tools are included in the `[project.optional-dependencies.dev]` section of `pyproject.toml`.
To install them for development, use:

```bash
pip install -e .[dev]
```
### 5. Run the tests

Ensure that all tests pass before submitting your changes. We use `pytest` for testing.

```bash
pytest
```

If any tests fail, fix the issues before proceeding.
### 6. Commit your changes

Once you are satisfied with your changes, commit them to your feature branch:

```bash
git add .
git commit -m "Add your commit message here"
```

Make sure to write a clear and concise commit message that describes the changes you made.

### 7. Push your changes

Push your changes to your branch:

```bash
git push origin feature/your-feature-name
```
### 8. Create a Pull Request (PR)

Navigate to the [main repository](https://gitea.iwm-tuebingen.de/HMC/preprocessing)
and open a pull request (PR) to the `main` branch.
The PR should describe the changes you made and why they are useful.

- Be sure to include a clear, concise description of your changes in the PR.
- If applicable, link to relevant issues in the PR description.

### 9. Review Process

After submitting your PR, a maintainer will review your changes. The review process may involve:

- Asking for changes or clarifications.
- Reviewing code style, formatting, and test coverage.
- Discussing the approach or implementation.

Once your PR is approved, it will be merged into the `main` branch.
## Pull Request Guidelines

- Include tests: Ensure that your changes come with appropriate test coverage.
- Follow coding standards: Follow the code style and formatting guidelines outlined in the project.
- Single feature per PR: Each PR should address a single feature or bug fix.
- Small, focused PRs: Keep your PRs small and focused to make reviewing easier.

## Reporting Bugs

If you find a bug in the software:

- [Search existing issues](https://gitea.iwm-tuebingen.de/HMC/preprocessing/issues): Before opening a new bug report, check if the issue has already been reported.
- Open a new issue: If the issue hasn't been reported yet, open a new issue with the following information:
  - A clear description of the problem.
  - Steps to reproduce the issue.
  - Expected behavior vs. actual behavior.
  - Any error messages or logs.

## License

By contributing, you agree that your contributions will be licensed under the same license as the project.
54 HMC_preprocessing.py Normal file
@@ -0,0 +1,54 @@
# HMC_preprocessing.py
from src.utils.database_documentation_generator import generate_db_api_reference
from src.utils.settings_loader import load_settings
from src.utils.data_loader import DataLoader
from src.process_all_waves import DataPreprocessingAllWaves
from src.utils.database_populator import populate_database
from src.utils.logging_config import setup_logging
import logging


def main():
    setup_logging()
    logger = logging.getLogger("preprocessing")

    try:
        logger.info("Starting data preprocessing pipeline.")

        settings = load_settings("settings.yaml")

        data_loader = DataLoader(settings)
        data_all_waves = data_loader.load_all_survey_data()

        data_preprocessor = DataPreprocessingAllWaves(data_all_waves, settings, logger)

        preprocessed_data_all_waves = data_preprocessor.preprocess_data()

        generate_db_api_reference(
            settings, logger, cronbachs_alphas=data_preprocessor.cronbachs_alphas
        )

        output_settings: dict = settings.get("output", {})
        populate_database(
            preprocessed_data_all_waves,
            database_path=output_settings.get(
                "database_path", "results/study_results.sqlite"
            ),
            export_csv=output_settings.get("export_csv", False),
            export_excel=output_settings.get("export_excel", False),
            csv_output_directory=output_settings.get("csv_output_directory", "results"),
            excel_output_directory=output_settings.get(
                "excel_output_directory", "results"
            ),
        )

        logger.info(
            "Data preprocessing and database population completed successfully."
        )

    except Exception as e:
        logger.exception(f"An error occurred in the preprocessing pipeline: {e}")


if __name__ == "__main__":
    main()
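The script above reads its output options from `settings.yaml` via `settings.get("output", {})`. Based purely on the keys and defaults visible in the `output_settings.get(...)` calls (this is an inferred sketch, not the project's documented schema), a minimal `output` section might look like:

```yaml
# Sketch of the "output" section HMC_preprocessing.py consults.
# Keys and default values are inferred from the .get() calls above.
output:
  database_path: results/study_results.sqlite
  export_csv: false
  export_excel: false
  csv_output_directory: results
  excel_output_directory: results
```

If a key is omitted, the script falls back to the default shown as the second argument of the corresponding `.get()` call.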
674 LICENSE Normal file
@@ -0,0 +1,674 @@
                    GNU GENERAL PUBLIC LICENSE
                       Version 3, 29 June 2007

 Copyright (C) 2007 Free Software Foundation, Inc. <http://fsf.org/>
 Everyone is permitted to copy and distribute verbatim copies
 of this license document, but changing it is not allowed.

                            Preamble

  The GNU General Public License is a free, copyleft license for
software and other kinds of works.

  The licenses for most software and other practical works are designed
to take away your freedom to share and change the works.  By contrast,
the GNU General Public License is intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.  We, the Free Software Foundation, use the
GNU General Public License for most of our software; it applies also to
any other work released this way by its authors.  You can apply it to
your programs, too.

  When we speak of free software, we are referring to freedom, not
price.  Our General Public Licenses are designed to make sure that you
have the freedom to distribute copies of free software (and charge for
them if you wish), that you receive source code or can get it if you
want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.

  To protect your rights, we need to prevent others from denying you
these rights or asking you to surrender the rights.  Therefore, you have
certain responsibilities if you distribute copies of the software, or if
you modify it: responsibilities to respect the freedom of others.

  For example, if you distribute copies of such a program, whether
gratis or for a fee, you must pass on to the recipients the same
freedoms that you received.  You must make sure that they, too, receive
or can get the source code.  And you must show them these terms so they
know their rights.

  Developers that use the GNU GPL protect your rights with two steps:
(1) assert copyright on the software, and (2) offer you this License
giving you legal permission to copy, distribute and/or modify it.

  For the developers' and authors' protection, the GPL clearly explains
that there is no warranty for this free software.  For both users' and
authors' sake, the GPL requires that modified versions be marked as
changed, so that their problems will not be attributed erroneously to
authors of previous versions.

  Some devices are designed to deny users access to install or run
modified versions of the software inside them, although the manufacturer
can do so.  This is fundamentally incompatible with the aim of
protecting users' freedom to change the software.  The systematic
pattern of such abuse occurs in the area of products for individuals to
use, which is precisely where it is most unacceptable.  Therefore, we
have designed this version of the GPL to prohibit the practice for those
products.  If such problems arise substantially in other domains, we
stand ready to extend this provision to those domains in future versions
of the GPL, as needed to protect the freedom of users.

  Finally, every program is threatened constantly by software patents.
States should not allow patents to restrict development and use of
software on general-purpose computers, but in those that do, we wish to
avoid the special danger that patents applied to a free program could
make it effectively proprietary.  To prevent this, the GPL assures that
patents cannot be used to render the program non-free.

  The precise terms and conditions for copying, distribution and
modification follow.

                       TERMS AND CONDITIONS
  0. Definitions.

  "This License" refers to version 3 of the GNU General Public License.

  "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.

  "The Program" refers to any copyrightable work licensed under this
License.  Each licensee is addressed as "you".  "Licensees" and
"recipients" may be individuals or organizations.

  To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy.  The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.

  A "covered work" means either the unmodified Program or a work based
on the Program.

  To "propagate" a work means to do anything with it that, without
permission, would make you directly or secondarily liable for
infringement under applicable copyright law, except executing it on a
computer or modifying a private copy.  Propagation includes copying,
distribution (with or without modification), making available to the
public, and in some countries other activities as well.

  To "convey" a work means any kind of propagation that enables other
parties to make or receive copies.  Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.

  An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License.  If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.

  1. Source Code.

  The "source code" for a work means the preferred form of the work
for making modifications to it.  "Object code" means any non-source
form of a work.

  A "Standard Interface" means an interface that either is an official
standard defined by a recognized standards body, or, in the case of
interfaces specified for a particular programming language, one that
is widely used among developers working in that language.

  The "System Libraries" of an executable work include anything, other
than the work as a whole, that (a) is included in the normal form of
packaging a Major Component, but which is not part of that Major
Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form.  A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.

  The "Corresponding Source" for a work in object code form means all
the source code needed to generate, install, and (for an executable
work) run the object code and to modify the work, including scripts to
control those activities.  However, it does not include the work's
System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work.  For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.

  The Corresponding Source need not include anything that users
can regenerate automatically from other parts of the Corresponding
Source.

  The Corresponding Source for a work in source code form is that
same work.
  2. Basic Permissions.

  All rights granted under this License are granted for the term of
copyright on the Program, and are irrevocable provided the stated
conditions are met.  This License explicitly affirms your unlimited
permission to run the unmodified Program.  The output from running a
covered work is covered by this License only if the output, given its
content, constitutes a covered work.  This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.

  You may make, run and propagate covered works that you do not
convey, without conditions so long as your license otherwise remains
in force.  You may convey covered works to others for the sole purpose
of having them make modifications exclusively for you, or provide you
with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright.  Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.

  Conveying under any other circumstances is permitted solely under
the conditions stated below.  Sublicensing is not allowed; section 10
makes it unnecessary.

  3. Protecting Users' Legal Rights From Anti-Circumvention Law.

  No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.

  When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.

  4. Conveying Verbatim Copies.

  You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.

  You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.

  5. Conveying Modified Source Versions.

  You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:

    a) The work must carry prominent notices stating that you modified
    it, and giving a relevant date.

    b) The work must carry prominent notices stating that it is
    released under this License and any conditions added under section
    7.  This requirement modifies the requirement in section 4 to
    "keep intact all notices".

    c) You must license the entire work, as a whole, under this
    License to anyone who comes into possession of a copy.  This
    License will therefore apply, along with any applicable section 7
    additional terms, to the whole of the work, and all its parts,
    regardless of how they are packaged.  This License gives no
    permission to license the work in any other way, but it does not
    invalidate such permission if you have separately received it.

    d) If the work has interactive user interfaces, each must display
    Appropriate Legal Notices; however, if the Program has interactive
    interfaces that do not display Appropriate Legal Notices, your
    work need not make them do so.

  A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit.  Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
  6. Conveying Non-Source Forms.

  You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:

    a) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by the
    Corresponding Source fixed on a durable physical medium
    customarily used for software interchange.

    b) Convey the object code in, or embodied in, a physical product
    (including a physical distribution medium), accompanied by a
    written offer, valid for at least three years and valid for as
    long as you offer spare parts or customer support for that product
    model, to give anyone who possesses the object code either (1) a
    copy of the Corresponding Source for all the software in the
    product that is covered by this License, on a durable physical
    medium customarily used for software interchange, for a price no
    more than your reasonable cost of physically performing this
    conveying of source, or (2) access to copy the
    Corresponding Source from a network server at no charge.

    c) Convey individual copies of the object code with a copy of the
    written offer to provide the Corresponding Source.  This
    alternative is allowed only occasionally and noncommercially, and
    only if you received the object code with such an offer, in accord
    with subsection 6b.

    d) Convey the object code by offering access from a designated
    place (gratis or for a charge), and offer equivalent access to the
    Corresponding Source in the same way through the same place at no
    further charge.  You need not require recipients to copy the
    Corresponding Source along with the object code.  If the place to
    copy the object code is a network server, the Corresponding Source
    may be on a different server (operated by you or a third party)
    that supports equivalent copying facilities, provided you maintain
    clear directions next to the object code saying where to find the
    Corresponding Source.  Regardless of what server hosts the
    Corresponding Source, you remain obligated to ensure that it is
    available for as long as needed to satisfy these requirements.

    e) Convey the object code using peer-to-peer transmission, provided
    you inform other peers where the object code and Corresponding
    Source of the work are being offered to the general public at no
    charge under subsection 6d.

  A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.

  A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling.  In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage.  For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product.  A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.

  "Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source.  The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.

  If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information.  But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).

  The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed.  Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.

  Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
  7. Additional Terms.

  "Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law.  If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.

  When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it.  (Additional permissions may be written to require their own
removal in certain cases when you modify the work.)  You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.

  Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:

    a) Disclaiming warranty or limiting liability differently from the
    terms of sections 15 and 16 of this License; or

    b) Requiring preservation of specified reasonable legal notices or
    author attributions in that material or in the Appropriate Legal
    Notices displayed by works containing it; or

    c) Prohibiting misrepresentation of the origin of that material, or
    requiring that modified versions of such material be marked in
    reasonable ways as different from the original version; or

    d) Limiting the use for publicity purposes of names of licensors or
    authors of the material; or

    e) Declining to grant rights under trademark law for use of some
    trade names, trademarks, or service marks; or

    f) Requiring indemnification of licensors and authors of that
    material by anyone who conveys the material (or modified versions of
    it) with contractual assumptions of liability to the recipient, for
    any liability that these contractual assumptions directly impose on
    those licensors and authors.

  All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10.  If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term.  If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.

  If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.

  Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.

  8. Termination.

  You may not propagate or modify a covered work except as expressly
provided under this License.  Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).

  However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.

  Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.

  Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License.  If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.

  9. Acceptance Not Required for Having Copies.

  You are not required to accept this License in order to receive or
run a copy of the Program.  Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance.  However,
nothing other than this License grants you permission to propagate or
modify any covered work.  These actions infringe copyright if you do
not accept this License.  Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.

  10. Automatic Licensing of Downstream Recipients.

  Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License.  You are not responsible
for enforcing compliance by third parties with this License.
|
||||
|
||||
An "entity transaction" is a transaction transferring control of an
|
||||
organization, or substantially all assets of one, or subdividing an
|
||||
organization, or merging organizations. If propagation of a covered
|
||||
work results from an entity transaction, each party to that
|
||||
transaction who receives a copy of the work also receives whatever
|
||||
licenses to the work the party's predecessor in interest had or could
|
||||
give under the previous paragraph, plus a right to possession of the
|
||||
Corresponding Source of the work from the predecessor in interest, if
|
||||
the predecessor has it or can get it with reasonable efforts.
|
||||
|
||||
You may not impose any further restrictions on the exercise of the
|
||||
rights granted or affirmed under this License. For example, you may
|
||||
not impose a license fee, royalty, or other charge for exercise of
|
||||
rights granted under this License, and you may not initiate litigation
|
||||
(including a cross-claim or counterclaim in a lawsuit) alleging that
|
||||
any patent claim is infringed by making, using, selling, offering for
|
||||
sale, or importing the Program or any portion of it.
|
||||
|
||||
11. Patents.
|
||||
|
||||
A "contributor" is a copyright holder who authorizes use under this
|
||||
License of the Program or a work on which the Program is based. The
|
||||
work thus licensed is called the contributor's "contributor version".
|
||||
|
||||
A contributor's "essential patent claims" are all patent claims
|
||||
owned or controlled by the contributor, whether already acquired or
|
||||
hereafter acquired, that would be infringed by some manner, permitted
|
||||
by this License, of making, using, or selling its contributor version,
|
||||
but do not include claims that would be infringed only as a
|
||||
consequence of further modification of the contributor version. For
|
||||
purposes of this definition, "control" includes the right to grant
|
||||
patent sublicenses in a manner consistent with the requirements of
|
||||
this License.
|
||||
|
||||
Each contributor grants you a non-exclusive, worldwide, royalty-free
|
||||
patent license under the contributor's essential patent claims, to
|
||||
make, use, sell, offer for sale, import and otherwise run, modify and
|
||||
propagate the contents of its contributor version.
|
||||
|
||||
In the following three paragraphs, a "patent license" is any express
|
||||
agreement or commitment, however denominated, not to enforce a patent
|
||||
(such as an express permission to practice a patent or covenant not to
|
||||
sue for patent infringement). To "grant" such a patent license to a
|
||||
party means to make such an agreement or commitment not to enforce a
|
||||
patent against the party.
|
||||
|
||||
If you convey a covered work, knowingly relying on a patent license,
|
||||
and the Corresponding Source of the work is not available for anyone
|
||||
to copy, free of charge and under the terms of this License, through a
|
||||
publicly available network server or other readily accessible means,
|
||||
then you must either (1) cause the Corresponding Source to be so
|
||||
available, or (2) arrange to deprive yourself of the benefit of the
|
||||
patent license for this particular work, or (3) arrange, in a manner
|
||||
consistent with the requirements of this License, to extend the patent
|
||||
license to downstream recipients. "Knowingly relying" means you have
|
||||
actual knowledge that, but for the patent license, your conveying the
|
||||
covered work in a country, or your recipient's use of the covered work
|
||||
in a country, would infringe one or more identifiable patents in that
|
||||
country that you have reason to believe are valid.
|
||||
|
||||
If, pursuant to or in connection with a single transaction or
|
||||
arrangement, you convey, or propagate by procuring conveyance of, a
|
||||
covered work, and grant a patent license to some of the parties
|
||||
receiving the covered work authorizing them to use, propagate, modify
|
||||
or convey a specific copy of the covered work, then the patent license
|
||||
you grant is automatically extended to all recipients of the covered
|
||||
work and works based on it.
|
||||
|
||||
A patent license is "discriminatory" if it does not include within
|
||||
the scope of its coverage, prohibits the exercise of, or is
|
||||
conditioned on the non-exercise of one or more of the rights that are
|
||||
specifically granted under this License. You may not convey a covered
|
||||
work if you are a party to an arrangement with a third party that is
|
||||
in the business of distributing software, under which you make payment
|
||||
to the third party based on the extent of your activity of conveying
|
||||
the work, and under which the third party grants, to any of the
|
||||
parties who would receive the covered work from you, a discriminatory
|
||||
patent license (a) in connection with copies of the covered work
|
||||
conveyed by you (or copies made from those copies), or (b) primarily
|
||||
for and in connection with specific products or compilations that
|
||||
contain the covered work, unless you entered into that arrangement,
|
||||
or that patent license was granted, prior to 28 March 2007.
|
||||
|
||||
Nothing in this License shall be construed as excluding or limiting
|
||||
any implied license or other defenses to infringement that may
|
||||
otherwise be available to you under applicable patent law.
|
||||
|
||||
12. No Surrender of Others' Freedom.
|
||||
|
||||
If conditions are imposed on you (whether by court order, agreement or
|
||||
otherwise) that contradict the conditions of this License, they do not
|
||||
excuse you from the conditions of this License. If you cannot convey a
|
||||
covered work so as to satisfy simultaneously your obligations under this
|
||||
License and any other pertinent obligations, then as a consequence you may
|
||||
not convey it at all. For example, if you agree to terms that obligate you
|
||||
to collect a royalty for further conveying from those to whom you convey
|
||||
the Program, the only way you could satisfy both those terms and this
|
||||
License would be to refrain entirely from conveying the Program.
|
||||
|
||||
13. Use with the GNU Affero General Public License.
|
||||
|
||||
Notwithstanding any other provision of this License, you have
|
||||
permission to link or combine any covered work with a work licensed
|
||||
under version 3 of the GNU Affero General Public License into a single
|
||||
combined work, and to convey the resulting work. The terms of this
|
||||
License will continue to apply to the part which is the covered work,
|
||||
but the special requirements of the GNU Affero General Public License,
|
||||
section 13, concerning interaction through a network will apply to the
|
||||
combination as such.
|
||||
|
||||
14. Revised Versions of this License.
|
||||
|
||||
The Free Software Foundation may publish revised and/or new versions of
|
||||
the GNU General Public License from time to time. Such new versions will
|
||||
be similar in spirit to the present version, but may differ in detail to
|
||||
address new problems or concerns.
|
||||
|
||||
Each version is given a distinguishing version number. If the
|
||||
Program specifies that a certain numbered version of the GNU General
|
||||
Public License "or any later version" applies to it, you have the
|
||||
option of following the terms and conditions either of that numbered
|
||||
version or of any later version published by the Free Software
|
||||
Foundation. If the Program does not specify a version number of the
|
||||
GNU General Public License, you may choose any version ever published
|
||||
by the Free Software Foundation.
|
||||
|
||||
If the Program specifies that a proxy can decide which future
|
||||
versions of the GNU General Public License can be used, that proxy's
|
||||
public statement of acceptance of a version permanently authorizes you
|
||||
to choose that version for the Program.
|
||||
|
||||
Later license versions may give you additional or different
|
||||
permissions. However, no additional obligations are imposed on any
|
||||
author or copyright holder as a result of your choosing to follow a
|
||||
later version.
|
||||
|
||||
15. Disclaimer of Warranty.
|
||||
|
||||
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
|
||||
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
|
||||
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
|
||||
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
|
||||
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
|
||||
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
|
||||
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
|
||||
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
|
||||
|
||||
16. Limitation of Liability.
|
||||
|
||||
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
|
||||
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
|
||||
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
|
||||
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
|
||||
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
|
||||
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
|
||||
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
|
||||
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
|
||||
SUCH DAMAGES.
|
||||
|
||||
17. Interpretation of Sections 15 and 16.
|
||||
|
||||
If the disclaimer of warranty and limitation of liability provided
|
||||
above cannot be given local legal effect according to their terms,
|
||||
reviewing courts shall apply local law that most closely approximates
|
||||
an absolute waiver of all civil liability in connection with the
|
||||
Program, unless a warranty or assumption of liability accompanies a
|
||||
copy of the Program in return for a fee.
|
||||
|
||||
END OF TERMS AND CONDITIONS
|
||||
|
||||
How to Apply These Terms to Your New Programs
|
||||
|
||||
If you develop a new program, and you want it to be of the greatest
|
||||
possible use to the public, the best way to achieve this is to make it
|
||||
free software which everyone can redistribute and change under these terms.
|
||||
|
||||
To do so, attach the following notices to the program. It is safest
|
||||
to attach them to the start of each source file to most effectively
|
||||
state the exclusion of warranty; and each file should have at least
|
||||
the "copyright" line and a pointer to where the full notice is found.
|
||||
|
||||
<one line to give the program's name and a brief idea of what it does.>
|
||||
Copyright (C) <year> <name of author>
|
||||
|
||||
This program is free software: you can redistribute it and/or modify
|
||||
it under the terms of the GNU General Public License as published by
|
||||
the Free Software Foundation, either version 3 of the License, or
|
||||
(at your option) any later version.
|
||||
|
||||
This program is distributed in the hope that it will be useful,
|
||||
but WITHOUT ANY WARRANTY; without even the implied warranty of
|
||||
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
|
||||
GNU General Public License for more details.
|
||||
|
||||
You should have received a copy of the GNU General Public License
|
||||
along with this program. If not, see <http://www.gnu.org/licenses/>.
|
||||
|
||||
Also add information on how to contact you by electronic and paper mail.
|
||||
|
||||
If the program does terminal interaction, make it output a short
|
||||
notice like this when it starts in an interactive mode:
|
||||
|
||||
{{ project }} Copyright (C) {{ year }} {{ organization }}
|
||||
This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
|
||||
This is free software, and you are welcome to redistribute it
|
||||
under certain conditions; type `show c' for details.
|
||||
|
||||
The hypothetical commands `show w' and `show c' should show the appropriate
|
||||
parts of the General Public License. Of course, your program's commands
|
||||
might be different; for a GUI interface, you would use an "about box".
|
||||
|
||||
You should also get your employer (if you work as a programmer) or school,
|
||||
if any, to sign a "copyright disclaimer" for the program, if necessary.
|
||||
For more information on this, and how to apply and follow the GNU GPL, see
|
||||
<http://www.gnu.org/licenses/>.
|
||||
|
||||
The GNU General Public License does not permit incorporating your program
|
||||
into proprietary programs. If your program is a subroutine library, you
|
||||
may consider it more useful to permit linking proprietary applications with
|
||||
the library. If this is what you want to do, use the GNU Lesser General
|
||||
Public License instead of this License. But first, please read
|
||||
<http://www.gnu.org/philosophy/why-not-lgpl.html>.
|
||||
114 README.md Normal file
@@ -0,0 +1,114 @@
# Preprocessing HMC longitudinal study

[GPL v3](https://www.gnu.org/licenses/gpl-3.0)

This is a Python application designed to preprocess data from the HMC longitudinal study.
It transforms survey data into a structured format suitable for analysis, using YAML configuration files that document the different questionnaires employed across the waves.

## Features

- Flexible data preprocessing using YAML configuration files
- Automatic generation of database documentation (Markdown and PDF)
- Support for multiple output formats (CSV, SQLite)
- Processing and validation of scales and composite scores across multiple survey waves
- Modular architecture for easy extensibility

## Installation

Clone the repository and install using pip:

```bash
git clone https://gitea.iwm-tuebingen.de/HMC/preprocessing.git
cd preprocessing
pip install .
```

This uses the `pyproject.toml` file for all dependencies and build instructions.
Note that the project requires Python 3.10 or higher, and the use of [virtual environments](https://docs.python.org/3/library/venv.html) is recommended.

## Usage

### 1. Global Settings

To process the data, first create the global settings file `settings.yaml` in the root directory of the project.
You can follow the example provided in `settings_example.yaml` to create your own settings file.
The main settings define the locations of the configuration and data files.
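A minimal `settings.yaml` might look like the following sketch. The key names here are illustrative assumptions only; consult `settings_example.yaml` for the actual schema:

```yaml
# Illustrative only: these key names are assumptions, not the real schema.
config_dir: "config/questionnaires"   # questionnaire YAML definitions
data_dir: "data"                      # raw survey exports (not under version control)
output_dir: "results"                 # per-wave CSV/Excel exports
database_file: "hmc_data.db"          # SQLite output
```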
### 2. Configuration Files

The project uses YAML configuration files to define the structure of the questionnaires and the processing steps.
Check the `config` directory to make sure that all required questionnaires are present.

### 3. Running the Preprocessing

To run the preprocessing, use the command line interface:

```bash
python HMC_preprocessing.py
```

### Output

The preprocessing generates several output files:

- SQLite database file `hmc_data.db` containing the processed data in a denormalized format
- Markdown documentation `database_api_reference.md` documenting the database schema
- PDF documentation `database_api_reference.pdf` documenting the database schema

Furthermore, each wave is exported as a separate CSV or Excel file in the `results` directory.
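Once a run finishes, the generated SQLite file can be inspected directly from Python. The snippet below only assumes the default filename `hmc_data.db`; see `database_api_reference.md` for the actual table and column names:

```python
import sqlite3

# Open the generated database (assumes hmc_data.db in the working directory)
# and list its tables; consult database_api_reference.md for column details.
con = sqlite3.connect("hmc_data.db")
tables = [row[0] for row in con.execute(
    "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
)]
print(tables)
con.close()
```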
## Code Structure

Our approach centers on the flexible processing of a wide range of psychological scales and composite measures across multiple survey waves.
The project leverages YAML configuration files to describe the structure, scoring, and validation rules for each questionnaire,
allowing new scales or response formats to be integrated with minimal code changes.

For each wave, the system reads the relevant configuration, imports the raw data, and applies the specified processing logic
(such as item inversion, custom scoring, and subgroup filtering) entirely based on the configuration.
This enables researchers to adapt the pipeline to evolving study designs or new measurement instruments without modifying the core codebase.

Processed data from all waves is consolidated into a unified database, and the schema is automatically documented.
The modular design ensures that each step, from data import to scale computation and documentation, is
transparent, reproducible, and easily extensible for future requirements.

To achieve this, the following modules are used:

- `settings_loader.py`: Loads and validates global settings
- `data_loader.py`: Imports raw survey data
- `scale_processor.py`: Processes individual scales
- `composite_processor.py`: Computes composite scores (e.g. for the user and non-user groups)
- `process_all_waves.py`: Orchestrates processing across all waves
- `database_populator.py`: Exports processed data to the database
- `database_documentation_generator.py`: Generates database documentation
- `logging_config.py`: Configures logging for the entire process
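The item inversion and mean scoring that the questionnaire configs describe can be sketched as follows. This is not the actual `scale_processor.py` implementation, only an illustration under the assumption that inverse items are reflected within `score_range` before averaging:

```python
# Sketch of the scoring logic the YAML configs describe; NOT the actual
# scale_processor.py implementation.

def score_scale(responses: dict[str, float], scale: dict) -> float:
    """Compute a mean scale score, reflecting inverse-coded items.

    `scale` mirrors one entry under `scales:` in a questionnaire YAML
    (keys: items, score_range, calculation).
    """
    lo, hi = scale["score_range"]
    values = []
    for item in scale["items"]:
        raw = responses[item["id"]]
        # Reflect inverse items within the score range: e.g. 2 -> 4 on a 1-5 scale.
        values.append((lo + hi) - raw if item.get("inverse") else raw)
    assert scale.get("calculation", "mean") == "mean"
    return sum(values) / len(values)


scale = {
    "score_range": [1, 5],
    "calculation": "mean",
    "items": [
        {"id": "attitudes_1", "inverse": False},
        {"id": "attitudes_2", "inverse": True},
    ],
}
print(score_scale({"attitudes_1": 4, "attitudes_2": 2}, scale))  # -> 4.0
```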
## Additional Information

- If a combined scale is used (combining user and no_user items), the group-specific scales are not included in the dataset.
- Boolean columns are saved as 0/1 in CSV and Excel files for better interoperability.
- If single items are retained, they are named as {scale_name}-item_{item_number}.
- In impact_of_delegation_on_skills, 6 is coded as NA as it does not fit the answer scale.
- The hope and concern scale is coded so that higher values indicate higher hope.
- In wave 1, delegation_comfort item 3 was always NA, so no Cronbach's alpha is reported.

## Contributing

To contribute, please follow these steps before making a pull request:

- Review the CONTRIBUTING.md guidelines.
- Add corresponding unit or integration tests for new code.
- Ensure your code passes linting and typing checks (black, ruff, mypy).

For development, you can install the package and all dev dependencies in editable mode:

```bash
pip install -e .[dev]
```

Note that the project uses [pre-commit](https://pre-commit.com/) to ensure code quality and consistency,
and that the main branch is protected to ensure that all tests pass before merging.
## License

The code in this project is licensed under the GNU General Public License v3.0. See the [LICENSE](LICENSE) file for details.

## Contact

For questions, feedback, or to report issues, please contact Gerrit Anders at g.anders@iwm-tuebingen.de.
51 config/questionnaires/agency.yaml Normal file
@@ -0,0 +1,51 @@
questionnaire: "agency"
scales:
  - name: "agency_favorite_ai"
    label: "Perceived Agency of Favorite AI system"
    items:
      - id: "agency_User_fav_1"
        text: "(piped fav AI) can create new goals."
        inverse: false
      - id: "agency_User_fav_2"
        text: "(piped fav AI) can communicate with people."
        inverse: false
      - id: "agency_User_fav_3"
        text: "(piped fav AI) can show emotions to other people."
        inverse: false
      - id: "agency_User_fav_4"
        text: "(piped fav AI) can change their behavior based on how people treat them."
        inverse: false
      - id: "agency_User_fav_5"
        text: "(piped fav AI) can adapt to different situations."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "agency_favorite_ai"
    reference: "self"

  - name: "agency_no_user"
    label: "Perceived Agency of AI in General"
    items:
      - id: "agency_noUser_1"
        text: "AI can create new goals."
        inverse: false
      - id: "agency_noUser_2"
        text: "AI can communicate with people."
        inverse: false
      - id: "agency_noUser_3"
        text: "AI can show emotions to other people."
        inverse: false
      - id: "agency_noUser_4"
        text: "AI can change their behavior based on how people treat them."
        inverse: false
      - id: "agency_noUser_5"
        text: "AI can adapt to different situations."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "agency_ai_no_user"
    reference: "self"
11 config/questionnaires/ai_adoption_factors_open_question.yaml Normal file
@@ -0,0 +1,11 @@
questionnaire: "ai_adoption_factors_open_question"
scales:
  - name: "ai_adoption_factors_open_question"
    label: "Changes needed for AI adoption"
    items:
      - id: "wht_need_chnge_noUser"
        text: "What would need to change for you to change your opinion and start using AI-based services on a regular basis?"
    format: "response"
    calculation: "response"
    output: "ai_adoption_factors_open_question"
    reference: "self"
23 config/questionnaires/ai_aversion.yaml Normal file
@@ -0,0 +1,23 @@
questionnaire: "ai_aversion"
scales:
  - name: "ai_aversion_no_user"
    label: "Aversion Toward AI Among Non-Users"
    items:
      - id: "AI_aversion_noUser_1"
        text: "I feel uneasy when I think about integrating AI systems into my daily activities."
        inverse: false
      - id: "AI_aversion_noUser_2"
        text: "I believe that the benefits of AI are exaggerated by technology companies."
        inverse: false
      - id: "AI_aversion_noUser_3"
        text: "I prefer relying on human judgment rather than on AI-driven processes."
        inverse: false
      - id: "AI_aversion_noUser_4"
        text: "I find the general hype surrounding AI to be excessive."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "ai_aversion_no_user"
    reference: "self"
13 config/questionnaires/apple_use.yaml Normal file
@@ -0,0 +1,13 @@
questionnaire: "apple_use"
scales:
  - name: "apple_use"
    label: "Current Use of Apple Device"
    items:
      - id: "apple_use"
        text: "Do you currently use an Apple device?"
    calculation: "categorical"
    response_options:
      "1": "Yes"
      "2": "No"
    output: "apple_use"
    reference: "self"
47 config/questionnaires/attitudes.yaml Normal file
@@ -0,0 +1,47 @@
questionnaire: "attitudes"
scales:
  - name: "attitudes"
    label: "Attitudes toward AI in general (ATTARI-12)"
    items:
      - id: "attitudes_1"
        text: "AI will make this world a better place."
        inverse: false
      - id: "attitudes_2"
        text: "I have strong negative emotions about AI."
        inverse: true
      - id: "attitudes_3"
        text: "I want to use technologies that rely on AI."
        inverse: false
      - id: "attitudes_4"
        text: "AI has more disadvantages than advantages."
        inverse: true
      - id: "attitudes_5"
        text: "I look forward to future AI developments."
        inverse: false
      - id: "attitudes_6"
        text: "AI offers solutions to many world problems."
        inverse: false
      - id: "attitudes_7"
        text: "I prefer technologies that do not feature AI."
        inverse: true
      - id: "attitudes_8"
        text: "I am afraid of AI."
        inverse: true
      - id: "attitudes_9"
        text: "I would rather choose a technology with AI than one without it."
        inverse: false
      - id: "attitudes_10"
        text: "AI creates problems rather than solving them."
        inverse: true
      - id: "attitudes_11"
        text: "When I think about AI, I have mostly positive feelings."
        inverse: false
      - id: "attitudes_12"
        text: "I would rather avoid technologies that are based on AI."
        inverse: true
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "attitudes"
    reference: "Stein et al., 2024"
41 config/questionnaires/attitudes_toward_ai_decisions.yaml Normal file
@@ -0,0 +1,41 @@
questionnaire: "attitudes_toward_ai_decisions"
scales:
  - name: "attitudes_toward_ai_decisions"
    label: "Attitudes Toward AI in Decision-Making"
    items:
      - id: "dec_mkng_1"
        text: "If given the opportunity, I would let an AI system make most of my decisions."
        inverse: false
      - id: "dec_mkng_2"
        text: "I think AI systems will eventually make better decisions than humans in most areas."
        inverse: false
      - id: "dec_mkng_3"
        text: "I would feel relieved if AI systems could take over decision-making responsibilities in my life."
        inverse: false
      - id: "dec_mkng_4"
        text: "I would be willing to let an AI system learn my preferences over time to make better decisions for me."
        inverse: false
      - id: "dec_mkng_5"
        text: "In case I cannot, I would trust an AI system to make important financial decisions on my behalf."
        inverse: false
      - id: "dec_mkng_6"
        text: "In case I cannot, I would trust an AI system to make important medical decisions on my behalf."
        inverse: false
      - id: "dec_mkng_7"
        text: "In case I cannot, I would trust an AI system to make important legal decisions on my behalf."
        inverse: false
      - id: "dec_mkng_8"
        text: "I would prefer to delegate routine decision-making tasks to an AI system."
        inverse: false
      - id: "dec_mkng_9"
        text: "Using AI to make decisions saves me time and effort."
        inverse: false
      - id: "dec_mkng_10"
        text: "I am comfortable letting AI systems choose the best options for me in daily life (e.g., what to buy, where to eat)."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neither disagree nor agree, 4 = Somewhat agree, 5 = Strongly agree"
    output: "attitudes_toward_ai_decisions"
    reference: "self"
41 config/questionnaires/attitudes_toward_disclosure.yaml Normal file
@@ -0,0 +1,41 @@
questionnaire: "attitudes_toward_disclosure"
scales:
  - name: "attitudes_toward_disclosure"
    label: "Attitudes Toward AI Data Disclosure and Privacy"
    items:
      - id: "disclosure_1"
        text: "I feel that AI practices are an invasion of privacy."
        inverse: false
      - id: "disclosure_2"
        text: "I feel uncomfortable about the types of information that AI collects."
        inverse: false
      - id: "disclosure_3"
        text: "The way that AI monitors its users makes me feel uneasy."
        inverse: false
      - id: "disclosure_4"
        text: "I feel personally invaded by the methods used by AI to collect information."
        inverse: false
      - id: "disclosure_5"
        text: "I am concerned about my privacy when using AI."
        inverse: false
      - id: "disclosure_6"
        text: "I would find it acceptable if AI records and uses information about my usage behavior."
        inverse: true
      - id: "disclosure_7"
        text: "I would provide AI access to information about me that is stored in or collected by other technological applications or systems."
        inverse: true
      - id: "disclosure_8"
        text: "I would provide a lot of information to AI about things that represent me personally."
        inverse: true
      - id: "disclosure_9"
        text: "I would find it acceptable if AI had a detailed profile of my person."
        inverse: true
      - id: "disclosure_10"
        text: "I would give AI access to a lot of information that would characterize me as a person."
        inverse: true
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neutral, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "attitudes_toward_disclosure"
    reference: "Gieselmann, M., & Sassenberg, K. (2023); https://doi.org/10.1177/08944393221142787"
23 config/questionnaires/attitudes_usage.yaml Normal file
@@ -0,0 +1,23 @@
questionnaire: "attitudes_usage"
scales:
  - name: "attitudes_usage"
    label: "Attitude toward usage (UT scale)"
    items:
      - id: "attitudes_usage_1"
        text: "I would rather avoid technologies that are based on AI."
        inverse: true
      - id: "attitudes_usage_2"
        text: "AI makes work more interesting."
        inverse: false
      - id: "attitudes_usage_3"
        text: "Working with AI is fun."
        inverse: false
      - id: "attitudes_usage_4"
        text: "I like working with AI."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = absolutely disagree, 5 = absolutely agree"
    output: "attitudes_usage"
    reference: "Venkatesh et al., 2009"
23
config/questionnaires/barrier_for_use.yaml
Normal file
@ -0,0 +1,23 @@
questionnaire: "barrier_for_use"
scales:
  - name: "barrier_for_use"
    label: "Barriers to Using AI More Frequently"
    items:
      - id: "barrier_1"
        text: "I avoid AI because I worry about errors."
        inverse: false
      - id: "barrier_2"
        text: "I avoid AI because it feels too complicated."
        inverse: false
      - id: "barrier_3"
        text: "I avoid AI because I don’t trust it."
        inverse: false
      - id: "barrier_4"
        text: "I avoid AI because I fear it will replace my skills."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "barrier_for_use"
    reference: "self"
112
config/questionnaires/bigfive.yaml
Normal file
@ -0,0 +1,112 @@
questionnaire: "bigfive"
scales:
  - name: "bigfive_extraversion"
    label: "Big Five Extraversion"
    items:
      - id: "bigfive_1"
        text: "I am talkative"
        inverse: false
      - id: "bigfive_2"
        text: "I have a tendency to be quiet"
        inverse: true
      - id: "bigfive_3"
        text: "I can be shy and inhibited"
        inverse: true
      - id: "bigfive_4"
        text: "I am outgoing and social"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = disagree, 2 = slightly disagree, 3 = Neutral, 4 = slightly agree, 5 = agree"
    output: "bigfive_extraversion"
    reference: "Veloso Gouveia et al., 2021"

  - name: "bigfive_agreeableness"
    label: "Big Five Agreeableness"
    items:
      - id: "bigfive_5"
        text: "I am helpful and selfless towards others"
        inverse: false
      - id: "bigfive_6"
        text: "I can be cold and distant"
        inverse: true
      - id: "bigfive_7"
        text: "I am considerate towards most people"
        inverse: false
      - id: "bigfive_8"
        text: "I can be impolite sometimes"
        inverse: true
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = disagree, 2 = slightly disagree, 3 = Neutral, 4 = slightly agree, 5 = agree"
    output: "bigfive_agreeableness"
    reference: "Veloso Gouveia et al., 2021"

  - name: "bigfive_conscientiousness"
    label: "Big Five Conscientiousness"
    items:
      - id: "bigfive_9"
        text: "I work thoroughly"
        inverse: false
      - id: "bigfive_10"
        text: "I can be careless"
        inverse: true
      - id: "bigfive_11"
        text: "I have a tendency to have little order in my life"
        inverse: true
      - id: "bigfive_12"
        text: "I make plans and follow through"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = disagree, 2 = slightly disagree, 3 = Neutral, 4 = slightly agree, 5 = agree"
    output: "bigfive_conscientiousness"
    reference: "Veloso Gouveia et al., 2021"

  - name: "bigfive_neuroticism"
    label: "Big Five Neuroticism"
    items:
      - id: "bigfive_13"
        text: "I am depressed"
        inverse: false
      - id: "bigfive_14"
        text: "I am relaxed, manage stress well"
        inverse: true
      - id: "bigfive_15"
        text: "I worry a lot"
        inverse: false
      - id: "bigfive_16"
        text: "I easily get nervous"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = disagree, 2 = slightly disagree, 3 = Neutral, 4 = slightly agree, 5 = agree"
    output: "bigfive_neuroticism"
    reference: "Veloso Gouveia et al., 2021"

  - name: "bigfive_openness"
    label: "Big Five Openness"
    items:
      - id: "bigfive_17"
        text: "I am original, have new ideas"
        inverse: false
      - id: "bigfive_18"
        text: "I have a vivid imagination"
        inverse: false
      - id: "bigfive_19"
        text: "I like to speculate, play with ideas"
        inverse: false
      - id: "bigfive_20"
        text: "I have few artistic interests"
        inverse: true
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = disagree, 2 = slightly disagree, 3 = Neutral, 4 = slightly agree, 5 = agree"
    output: "bigfive_openness"
    reference: "Veloso Gouveia et al., 2021"
14
config/questionnaires/change_in_writing_without_ai.yaml
Normal file
@ -0,0 +1,14 @@
questionnaire: "change_in_writing_without_ai"
scales:
  - name: "change_in_writing_without_ai"
    label: "Change in Time Spent Writing Without AI"
    items:
      - id: "chng_wrtg_delg"
        text: "Over the past year, has the amount of time you spend writing without AI increased, decreased, or stayed the same?"
    calculation: "ordinal"
    response_options:
      "1": "Decreased"
      "2": "Stayed the same"
      "3": "Increased"
    output: "change_in_writing_without_ai"
    reference: "self"
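For single-item questions with `calculation: "ordinal"`, `response_options` is a code-to-label mapping. A small sketch of converting between codes and labels, under the assumption that the numeric code itself serves as the ordinal score (helper names are hypothetical, not the package's API):

```python
# Illustrative code/label handling for "calculation: ordinal".
# Assumption: the integer code is kept as the score; labels come from the YAML above.

response_options = {"1": "Decreased", "2": "Stayed the same", "3": "Increased"}

def code_to_label(code, options):
    """Map a stored response code to its human-readable label."""
    return options[str(code)]

def label_to_code(label, options):
    """Map a label back to its integer code."""
    return {v: int(k) for k, v in options.items()}[label]

print(code_to_label(3, response_options))                  # Increased
print(label_to_code("Stayed the same", response_options))  # 2
```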
23
config/questionnaires/change_of_personal_role.yaml
Normal file
@ -0,0 +1,23 @@
questionnaire: "change_of_personal_role"
scales:
  - name: "change_of_personal_role"
    label: "Perceived Personal Role and Adaptation to AI"
    items:
      - id: "personalrole_1"
        text: "I believe AI will change my role in society."
        inverse: false
      - id: "personalrole_2"
        text: "I think I can use AI to improve my work or life."
        inverse: false
      - id: "personalrole_3"
        text: "I feel powerless to influence how AI affects me."
        inverse: false
      - id: "personalrole_4"
        text: "I expect to adapt my skills because of AI."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "change_of_personal_role"
    reference: "self"
21
config/questionnaires/closeness.yaml
Normal file
@ -0,0 +1,21 @@
questionnaire: "closeness"
scales:
  - name: "closeness_favorite_ai"
    label: "Perceived Closeness to Favorite AI"
    items:
      - id: "closeness_User_fav"
        text: "Please select the image that best represents how you perceive the relation you have with (piped fav AI)."
        inverse: false
    score_range: [1, 7]
    format: "Image Selection"
    calculation: "categorical"
    response_options:
      "1": "no_overlap"
      "2": "slight_overlap"
      "3": "some_overlap"
      "4": "moderate_overlap"
      "5": "strong_overlap"
      "6": "very_strong_overlap"
      "7": "almost_complete_overlap"
    output: "closeness_favorite_ai"
    reference: "Aron et al. (1992); https://doi.org/10.1037/0022-3514.63.4.596"
74
config/questionnaires/cognitiv_selfesteem.yaml
Normal file
@ -0,0 +1,74 @@
questionnaire: "cognitive_selfesteem"
scales:
  - name: "cognitive_selfesteem_thinking"
    label: "cognitive selfesteem thinking"
    items:
      - id: "self_esteem_1"
        text: "I am smart"
        inverse: false
      - id: "self_esteem_2"
        text: "I am smarter than the average person"
        inverse: false
      - id: "self_esteem_3"
        text: "My mind is one of my best qualities"
        inverse: false
      - id: "self_esteem_4"
        text: "I am good at thinking"
        inverse: false
      - id: "self_esteem_5"
        text: "I feel good about my ability to think through problems"
        inverse: false
      - id: "self_esteem_6"
        text: "I am capable of solving most problems without outside help"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "cognitive_selfesteem_thinking"
    reference: "Ward (2013)"

  - name: "cognitive_selfesteem_memory"
    label: "cognitive selfesteem memory"
    items:
      - id: "self_esteem_7"
        text: "I am proud of my memory"
        inverse: false
      - id: "self_esteem_8"
        text: "I feel good about my ability to remember things"
        inverse: false
      - id: "self_esteem_9"
        text: "I have a better memory than most people"
        inverse: false
      - id: "self_esteem_10"
        text: "I have a good memory for recalling trivial information"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "cognitive_selfesteem_memory"
    reference: "Ward (2013)"

  - name: "cognitive_selfesteem_transactive_memory"
    label: "cognitive selfesteem transactive memory"
    items:
      - id: "self_esteem_11"
        text: "I know where to look to answer questions I don't know myself"
        inverse: false
      - id: "self_esteem_12"
        text: "When I don't know the answer to a question right away, I know where to find it"
        inverse: false
      - id: "self_esteem_13"
        text: "I know which people to ask when I don't know the answer to a question"
        inverse: false
      - id: "self_esteem_14"
        text: "I have a knack for tracking down information"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "cognitive_selfesteem_transactive_memory"
    reference: "Ward (2013)"
41
config/questionnaires/companionship.yaml
Normal file
@ -0,0 +1,41 @@
questionnaire: "companionship"
scales:
  - name: "companionship_favorite_ai"
    label: "Perceived Companionship with Favorite AI"
    items:
      - id: "companionship_User_fav_1"
        text: "During my interactions with (piped fav AI), I spend enjoyable time."
        inverse: false
      - id: "companionship_User_fav_2"
        text: "I feel included when I interact with (piped fav AI)."
        inverse: false
      - id: "companionship_User_fav_3"
        text: "When I interact with (piped fav AI), it connects with me with things I like."
        inverse: false
      - id: "companionship_User_fav_4"
        text: "I share activities with (piped fav AI) for fun."
        inverse: false
      - id: "companionship_User_fav_5"
        text: "I have fun when interacting with (piped fav AI)."
        inverse: false
      - id: "companionship_User_fav_6"
        text: "(piped fav AI) does things with me that I enjoy."
        inverse: false
      - id: "companionship_User_fav_7"
        text: "I spend leisure time interacting with (piped fav AI)."
        inverse: false
      - id: "companionship_User_fav_8"
        text: "I do things together with (piped fav AI) that I like."
        inverse: false
      - id: "companionship_User_fav_9"
        text: "I participate in activities with (piped fav AI) that make me feel included."
        inverse: false
      - id: "companionship_User_fav_10"
        text: "I spend time with (piped fav AI) in ways that make me feel I belong."
        inverse: false
    score_range: [0, 4]
    format: "Likert"
    calculation: "mean"
    response_options: "0 = Never, 1 = Rarely, 2 = Sometimes, 3 = Pretty often, 4 = A lot"
    output: "companionship_favorite_ai"
    reference: "self"
23
config/questionnaires/concerns_about_loss_of_autonomy.yaml
Normal file
@ -0,0 +1,23 @@
questionnaire: "concerns_about_loss_of_autonomy"
scales:
  - name: "concerns_about_loss_of_autonomy_no_user"
    label: "Concerns About Loss of Autonomy Due to AI Among Non-Users"
    items:
      - id: "loss_auto_noUser_1"
        text: "I feel that using AI reduces my control over important decisions."
        inverse: false
      - id: "loss_auto_noUser_2"
        text: "I am uncomfortable with machines making choices that affect my daily life."
        inverse: false
      - id: "loss_auto_noUser_3"
        text: "I worry that AI could reduce my ability to act independently."
        inverse: false
      - id: "loss_auto_noUser_4"
        text: "I believe that AI-driven automation could erode personal autonomy over time."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "concerns_about_loss_of_autonomy_no_user"
    reference: "self"
107
config/questionnaires/consequences_ai_use.yaml
Normal file
@ -0,0 +1,107 @@
questionnaire: "consequences_ai_use"
scales:
  - name: "consequences_ai_use_user"
    label: "Users' perceived consequences of AI use"
    items:
      - id: "conseq1_User_1"
        text: "personal development"
        inverse: false
      - id: "conseq1_User_2"
        text: "wellbeing"
        inverse: false
      - id: "conseq1_User_3"
        text: "personality"
        inverse: false
      - id: "conseq1_User_4"
        text: "friendships"
        inverse: false
      - id: "conseq1_User_5"
        text: "interactions with strangers"
        inverse: false
      - id: "conseq1_User_6"
        text: "interactions with family and friends"
        inverse: false
      - id: "conseq1_User_7"
        text: "relationships with family and friends"
        inverse: false
      - id: "conseq2_User_1"
        text: "feelings toward other people"
        inverse: false
      - id: "conseq2_User_2"
        text: "patience with other people"
        inverse: false
      - id: "conseq2_User_3"
        text: "politeness toward other people"
        inverse: false
      - id: "conseq2_User_4"
        text: "collaboration with colleagues"
        inverse: false
      - id: "conseq2_User_5"
        text: "communication skills"
        inverse: false
      - id: "conseq2_User_6"
        text: "problem-solving skills"
        inverse: false
      - id: "conseq2_User_7"
        text: "other skills"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Very harmful, 7 = Very helpful"
    output: "consequences_ai_use_user"
    reference: "Guingrich & Graziano (2024), self"
    retain_single_items: true

  - name: "consequences_ai_use_no_user"
    label: "Non-users' perceived consequences of AI use"
    items:
      - id: "conseq1_noUser_1"
        text: "personal development"
        inverse: false
      - id: "conseq1_noUser_2"
        text: "wellbeing"
        inverse: false
      - id: "conseq1_noUser_3"
        text: "personality"
        inverse: false
      - id: "conseq1_noUser_4"
        text: "friendships"
        inverse: false
      - id: "conseq1_noUser_5"
        text: "interactions with strangers"
        inverse: false
      - id: "conseq1_noUser_6"
        text: "interactions with family and friends"
        inverse: false
      - id: "conseq1_noUser_7"
        text: "relationships with family and friends"
        inverse: false
      - id: "conseq2_noUser_1"
        text: "feelings toward other people"
        inverse: false
      - id: "conseq2_noUser_2"
        text: "patience with other people"
        inverse: false
      - id: "conseq2_noUser_3"
        text: "politeness toward other people"
        inverse: false
      - id: "conseq2_noUser_4"
        text: "collaboration with colleagues"
        inverse: false
      - id: "conseq2_noUser_5"
        text: "communication skills"
        inverse: false
      - id: "conseq2_noUser_6"
        text: "problem-solving skills"
        inverse: false
      - id: "conseq2_noUser_7"
        text: "other skills"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Very harmful, 7 = Very helpful"
    output: "consequences_ai_use_no_user"
    reference: "Guingrich & Graziano (2024), self"
    retain_single_items: true
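Several scales in this commit set `retain_single_items: true`. A hedged sketch of what that plausibly means for the output: each raw item value is kept alongside the computed scale score (the function and field handling are assumptions for illustration, not the package's verified implementation):

```python
# Illustrative output row for a scale with retain_single_items: true.
# Assumption: raw item values are kept next to the computed scale mean.

def build_output(responses, item_ids, output_name, retain_single_items=False):
    """Return one output row: optionally the raw items, plus the scale mean."""
    row = {}
    if retain_single_items:
        for item_id in item_ids:
            row[item_id] = responses[item_id]
    row[output_name] = sum(responses[i] for i in item_ids) / len(item_ids)
    return row

row = build_output(
    {"conseq1_User_1": 5, "conseq1_User_2": 7},
    ["conseq1_User_1", "conseq1_User_2"],
    "consequences_ai_use_user",
    retain_single_items=True,
)
print(row)  # {'conseq1_User_1': 5, 'conseq1_User_2': 7, 'consequences_ai_use_user': 6.0}
```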
25
config/questionnaires/context_of_use.yaml
Normal file
@ -0,0 +1,25 @@
questionnaire: "context_of_use"
scales:
  - name: "context_of_use_user"
    label: "Primary Context of Personal AI Usage"
    items:
      - id: "contxt"
        text: "In which context do you use AI systems primarily?"
    calculation: "categorical"
    response_options:
      "1": "Personal context"
      "2": "Professional context"
    output: "context_of_use_user"
    reference: "self"

  - name: "context_of_use_no_user"
    label: "Primary Context of AI System Usage"
    items:
      - id: "contxt_noUser"
        text: "In which context do you see AI systems primarily used?"
    calculation: "categorical"
    response_options:
      "1": "Personal context"
      "2": "Professional context"
    output: "context_of_use_no_user"
    reference: "self"
52
config/questionnaires/credibility.yaml
Normal file
@ -0,0 +1,52 @@
questionnaire: "credibility"
scales:
  - name: "credibility_favorite_ai"
    label: "Perceived credibility of favorite AI system"
    items:
      - id: "credibility_User_fav_1"
        text: "unbelievable - believable"
        inverse: false
      - id: "credibility_User_fav_2"
        text: "inaccurate - accurate"
        inverse: false
      - id: "credibility_User_fav_3"
        text: "untrustworthy - trustworthy"
        inverse: false
      - id: "credibility_User_fav_4"
        text: "biased - unbiased"
        inverse: false
      - id: "credibility_User_fav_5"
        text: "not credible - credible"
        inverse: false
    score_range: [1, 5]
    format: "bipolar"
    calculation: "mean"
    response_options: "1 = agree with left option, 5 = agree with right option"
    output: "credibility_favorite_ai"
    reference: "Flanagin & Metzger (2000)"
    retain_single_items: true

  - name: "credibility_ai_no_user"
    label: "Perceived credibility (no user)"
    items:
      - id: "credibility_noUser_1"
        text: "unbelievable - believable"
        inverse: false
      - id: "credibility_noUser_2"
        text: "inaccurate - accurate"
        inverse: false
      - id: "credibility_noUser_3"
        text: "untrustworthy - trustworthy"
        inverse: false
      - id: "credibility_noUser_4"
        text: "biased - unbiased"
        inverse: false
      - id: "credibility_noUser_5"
        text: "not credible - credible"
        inverse: false
    score_range: [1, 5]
    format: "bipolar"
    calculation: "mean"
    response_options: "1 = agree with left option, 5 = agree with right option"
    output: "credibility_ai_no_user"
    reference: "Flanagin & Metzger (2000)"
81
config/questionnaires/creepiness.yaml
Normal file
@ -0,0 +1,81 @@
questionnaire: "creepiness"
scales:
  - name: "creepiness_favorite_ai_user"
    label: "Perceived Creepiness Favorite AI"
    items:
      - id: "creepy_User_fav_1"
        text: "I have a queasy feeling."
        inverse: false
      - id: "creepy_User_fav_2"
        text: "I have a feeling that there is something shady."
        inverse: false
      - id: "creepy_User_fav_3"
        text: "I feel uneasy."
        inverse: false
      - id: "creepy_User_fav_4"
        text: "I have an indefinable fear."
        inverse: false
      - id: "creepy_User_fav_5"
        text: "This interaction somehow feels threatening."
        inverse: false
      - id: "creepy_User_fav_6"
        text: "I don't know how to judge the interaction with (piped fav AI)."
        inverse: false
      - id: "creepy_User_fav_7"
        text: "I don't know exactly what is happening to me."
        inverse: false
      - id: "creepy_User_fav_8"
        text: "Things are going on that I don't understand."
        inverse: false
      - id: "creepy_User_fav_9"
        text: "I don't know exactly how to behave."
        inverse: false
      - id: "creepy_User_fav_10"
        text: "I do not know exactly what to expect of this interaction."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neutral, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "creepiness_favorite_ai_user"
    reference: "Langer & König (2018); https://doi.org/10.3389/fpsyg.2018.02220"

  - name: "creepiness_ai_no_user"
    label: "Perceived Creepiness AI"
    items:
      - id: "creepy_noUser_1"
        text: "I have a queasy feeling."
        inverse: false
      - id: "creepy_noUser_2"
        text: "I have a feeling that there is something shady."
        inverse: false
      - id: "creepy_noUser_3"
        text: "I feel uneasy."
        inverse: false
      - id: "creepy_noUser_4"
        text: "I have an indefinable fear."
        inverse: false
      - id: "creepy_noUser_5"
        text: "This interaction somehow feels threatening."
        inverse: false
      - id: "creepy_noUser_6"
        text: "I don't know how to judge the interaction with (piped fav AI)."
        inverse: false
      - id: "creepy_noUser_7"
        text: "I don't know exactly what is happening to me."
        inverse: false
      - id: "creepy_noUser_8"
        text: "Things are going on that I don't understand."
        inverse: false
      - id: "creepy_noUser_9"
        text: "I don't know exactly how to behave."
        inverse: false
      - id: "creepy_noUser_10"
        text: "I do not know exactly what to expect of this interaction."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neutral, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "creepiness_ai_no_user"
    reference: "Langer & König (2018); https://doi.org/10.3389/fpsyg.2018.02220"
32
config/questionnaires/delegation_comfort.yaml
Normal file
@ -0,0 +1,32 @@
questionnaire: "delegation_comfort"
scales:
  - name: "delegation_comfort"
    label: "Comfort Level in Delegating Different Task Types to AI"
    items:
      - id: "delg_tsk_typs_1"
        text: "Content Creation (including writing): generating written, visual or audio content from scratch. Examples: writing texts, composing music, creating graphics."
        inverse: false
      - id: "delg_tsk_typs_2"
        text: "Content Creation (generating new ideas or concepts): Examples: brainstorming, idea generation for stories or projects, artistic inspiration."
        inverse: false
      - id: "delg_tsk_typs_3"
        text: "Information Search (search engine replacement, looking up some facts)."
        inverse: false
      - id: "delg_tsk_typs_4"
        text: "Advice and Recommendation (providing suggestions or guidance based on input data. Examples: product recommendations, lifestyle advice, travel suggestions)."
        inverse: false
      - id: "delg_tsk_typs_5"
        text: "Explanations and Learning (offering detailed explanations or educational content. Examples: explaining complex concepts, tutoring in various subjects, answering questions)."
        inverse: false
      - id: "delg_tsk_typs_6"
        text: "Analysis and Processing (conducting analytical tasks or data processing. Examples: coding assistance, data analysis, statistical computations, sentiment analysis)."
        inverse: false
      - id: "delg_tsk_typs_7"
        text: "Automation and Productivity (automating routine tasks to improve efficiency. Examples: scheduling, reminders, managing emails, automating workflows)."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Extremely uncomfortable, 2 = Somewhat uncomfortable, 3 = Neither comfortable nor uncomfortable, 4 = Somewhat comfortable, 5 = Extremely comfortable"
    output: "delegation_comfort"
    reference: "self"
64
config/questionnaires/demographics.yaml
Normal file
@ -0,0 +1,64 @@
questionnaire: "demographics"
scales:
  - name: "age"
    label: "Age of participants (numerical value 18-110)"
    items:
      - id: "age"
        text: "age"
    score_range: [18, 110]
    format: "numeric"
    calculation: "response"
    response_options: "numeric age"
    output: "age"
    reference: "self"

  - name: "gender"
    label: "Gender"
    items:
      - id: "gender"
        text: "Please indicate your gender"
    calculation: "categorical"
    response_options:
      "1": "Male"
      "2": "Female"
      "3": "Non-binary / third gender"
      "4": "Prefer not to say"
    missing_response_option: ["4"]
    output: "gender"
    reference: "self"

  - name: "education"
    label: "Highest Level of Education Completed"
    items:
      - id: "education"
        text: "What is the highest level of education you have completed?"
    calculation: "ordinal"
    response_options:
      "1": "Some high school or less"
      "2": "High school diploma or GED"
      "3": "Some college, but no degree"
      "4": "Associates or technical degree"
      "5": "Bachelor’s degree"
      "6": "Graduate or professional degree (MA, MS, MBA, PhD, JD, MD, DDS etc.)"
      "7": "Prefer not to say"
    missing_response_option: ["7"]
    output: "education"
    reference: "self"

  - name: "income"
    label: "Total Household Income Before Taxes (Past 12 Months)"
    items:
      - id: "income"
        text: "What was your total household income before taxes during the past 12 months?"
    calculation: "ordinal"
    response_options:
      "1": "Less than $25,000"
      "2": "$25,000-$49,999"
      "3": "$50,000-$74,999"
      "4": "$75,000-$99,999"
      "5": "$100,000-$149,999"
      "6": "$150,000 or more"
      "7": "Prefer not to say"
    missing_response_option: ["7"]
    output: "income"
    reference: "self"
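The demographics file introduces `missing_response_option`: codes such as "Prefer not to say" that should be treated as missing data rather than as a valid category. A minimal sketch of that recoding, under the assumption that such codes map to `None` (the helper name is illustrative, not the package's API):

```python
# Illustrative recoding with missing_response_option.
# Assumption: codes listed there are mapped to None (missing), not to a category.

def recode_categorical(code, response_options, missing_codes=()):
    """Return the label for a code, or None if the code counts as missing."""
    code = str(code)
    if code in missing_codes:
        return None
    return response_options[code]

gender_options = {
    "1": "Male",
    "2": "Female",
    "3": "Non-binary / third gender",
    "4": "Prefer not to say",
}
print(recode_categorical(2, gender_options, missing_codes=["4"]))  # Female
print(recode_categorical(4, gender_options, missing_codes=["4"]))  # None
```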
23
config/questionnaires/distrust_toward_ai_corporations.yaml
Normal file
@ -0,0 +1,23 @@
questionnaire: "distrust_toward_ai_corporations"
scales:
  - name: "distrust_toward_ai_corporations_no_user"
    label: "Distrust of Corporations Regarding AI Among Non-Users"
    items:
      - id: "distrust_corp_noUser_1"
        text: "I am uneasy about a few big AI companies having too much influence."
        inverse: false
      - id: "distrust_corp_noUser_2"
        text: "I worry that the growth of AI will concentrate power in major corporations."
        inverse: false
      - id: "distrust_corp_noUser_3"
        text: "I do not trust large tech firms to use AI responsibly for everyone."
        inverse: false
      - id: "distrust_corp_noUser_4"
        text: "I believe that current AI development favors corporate interests over individual rights."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "distrust_toward_ai_corporations_no_user"
    reference: "self"
23
config/questionnaires/ecological_concerns.yaml
Normal file
@ -0,0 +1,23 @@
questionnaire: "ecological_concerns"
scales:
  - name: "ecological_concerns_no_user"
    label: "Environmental and Ethical Concerns About AI Among Non-Users"
    items:
      - id: "eco_concern_noUser_1"
        text: "I am concerned that the widespread adoption of AI could have a negative impact on the environment."
        inverse: false
      - id: "eco_concern_noUser_2"
        text: "I worry that AI development sometimes involves practices that are ethically questionable."
        inverse: false
      - id: "eco_concern_noUser_3"
        text: "I believe that AI systems may contribute to ecological degradation."
        inverse: false
      - id: "eco_concern_noUser_4"
        text: "I feel that the potential ethical issues associated with AI outweigh its benefits."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "ecological_concerns_no_user"
    reference: "self"
47
config/questionnaires/effect_on_behavior_toward_people.yaml
Normal file
@ -0,0 +1,47 @@
questionnaire: "effect_on_behavior_toward_people"
scales:
  - name: "effect_on_behavior_toward_people_user"
    label: "Perceived Effect of AI Interaction on Behavior Toward People"
    items:
      - id: "effectpeople_User_1"
        text: "The way I act toward AI affects how I interact with people."
        inverse: false
      - id: "effectpeople_User_2"
        text: "Being polite to AI makes me more polite to others."
        inverse: false
      - id: "effectpeople_User_3"
        text: "I talk to other people the same way I talk to AI."
        inverse: false
      - id: "effectpeople_User_4"
        text: "I treat other people the same way I treat AI."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "effect_on_behavior_toward_people_user"
    reference: "self"
    retain_single_items: true

  - name: "effect_on_behavior_toward_people_no_user"
    label: "Perceived Potential Effect of AI Interaction on Behavior Toward People (Non-Users)"
    items:
      - id: "effectpeople_noUser_1"
        text: "The way I act toward AI would affect how I interact with people."
        inverse: false
      - id: "effectpeople_noUser_2"
        text: "Being polite to AI would make me more polite to others."
        inverse: false
      - id: "effectpeople_noUser_3"
        text: "I would talk to other people the same way I talk to AI."
        inverse: false
      - id: "effectpeople_noUser_4"
        text: "I would treat other people the same way I treat AI."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "effect_on_behavior_toward_people_no_user"
    reference: "self"
23
config/questionnaires/effects_on_work.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "effects_on_work"
scales:
  - name: "effects_on_work"
    label: "Effects on work"
    items:
      - id: "effects_on_work_1"
        text: "Using AI has increased my efficiency."
        inverse: false
      - id: "effects_on_work_2"
        text: "Using AI has increased the quality of my work."
        inverse: false
      - id: "effects_on_work_3"
        text: "Using AI has increased my productivity."
        inverse: false
      - id: "effects_on_work_4"
        text: "Using AI has given me time and capacity to focus on other tasks."
        inverse: false
    score_range: [1, 5]
    format: "likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "effects_on_work"
    reference: "self"
21
config/questionnaires/enjoyment.yaml
Normal file
@@ -0,0 +1,21 @@
questionnaire: "enjoyment"
scales:
  - name: "enjoyment_favorite_ai_user"
    label: "Enjoyment of favorite AI system"
    items:
      - id: "enjoyment_User_fav_1"
        text: "I find using [favorite AI] to be enjoyable."
        inverse: false
      - id: "enjoyment_User_fav_2"
        text: "The actual process of using [favorite AI] is pleasant."
        inverse: false
      - id: "enjoyment_User_fav_3"
        text: "I have fun using [favorite AI]."
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "enjoyment_favorite_ai_user"
    reference: "Venkatesh (2000)"
    retain_single_items: true
14
config/questionnaires/ethical_concerns_delegation.yaml
Normal file
@@ -0,0 +1,14 @@
questionnaire: "ethical_concerns_delegation"
scales:
  - name: "ethical_concerns_delegation"
    label: "Concern About Ethical Implications of Delegating Tasks to AI"
    items:
      - id: "cncrnd_ethic_delg"
        text: "How concerned are you about the moral or ethical implications of delegating important tasks to AI?"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "response"
    response_options: "1 = Not at all concerned, 2 = A little bit concerned, 3 = Concerned, 4 = Strongly concerned, 5 = Extremely concerned"
    output: "ethical_concerns_delegation"
    reference: "self"
23
config/questionnaires/ethical_concerns_general.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "ethical_concerns_general"
scales:
  - name: "ethical_concerns_general_no_user"
    label: "Ethical Concerns About AI Among Non-Users"
    items:
      - id: "ethical_impl_noUser_1"
        text: "I worry that AI could be used to unfairly shape public opinion."
        inverse: false
      - id: "ethical_impl_noUser_2"
        text: "I worry that relying more on AI could worsen social inequality."
        inverse: false
      - id: "ethical_impl_noUser_3"
        text: "I believe that AI often overlooks ethical issues in its design."
        inverse: false
      - id: "ethical_impl_noUser_4"
        text: "I feel that the social risks of AI outweigh its benefits."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "ethical_concerns_general_no_user"
    reference: "self"
43
config/questionnaires/favorite_ai.yaml
Normal file
@@ -0,0 +1,43 @@
questionnaire: "favorite_ai"
scales:
  - name: "choice_favorite_ai_user"
    label: "Choice of favorite AI system"
    items:
      - id: "choose_favAI_User"
        text: "Which AI system is your favorite?"
        open_ended_id: "choose_favAI_gnr_10_TEXT" # optional to also capture open-ended responses
    calculation: "categorical"
    response_options:
      "1": "ChatGPT"
      "2": "Microsoft Copilot (formerly Bing Chat)"
      "3": "Claude"
      "4": "Google AI incl. Google Gemini (formerly Bard)"
      "5": "Alexa"
      "6": "Siri/Apple Intelligence"
      "7": "Samsung Galaxy AI"
      "8": "Twitter/X Grok"
      "9": "Meta AI"
      "10": "Other"
    output: "choice_favorite_ai_user"
    reference: "self"

  - name: "choice_favorite_ai_no_user"
    label: "Choice of favorite AI system"
    items:
      - id: "choose_favAI_noUser"
        text: "Which AI system is your favorite?"
        open_ended_id: "choose_favAI_noUser_10_TEXT"
    calculation: "categorical"
    response_options:
      "1": "ChatGPT"
      "2": "Microsoft Copilot (formerly Bing Chat)"
      "3": "Claude"
      "4": "Google AI incl. Google Gemini (formerly Bard)"
      "5": "Alexa"
      "6": "Siri/Apple Intelligence"
      "7": "Samsung Galaxy AI"
      "8": "Twitter/X Grok"
      "9": "Meta AI"
      "10": "Other"
    output: "choice_favorite_ai_no_user"
    reference: "self"
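A `calculation: "categorical"` entry like the one above maps numeric codes to labels, with the `open_ended_id` column holding free text when "Other" is selected. A hypothetical sketch of that decoding step (function and variable names here are assumptions, not the package's actual API):

```python
def decode_categorical(code, response_options, open_text=None):
    """Map a coded response to its label; substitute the participant's
    free-text answer when the chosen option is "Other"."""
    label = response_options.get(str(code))
    if label == "Other" and open_text:
        return open_text
    return label

options = {"1": "ChatGPT", "3": "Claude", "10": "Other"}
print(decode_categorical(3, options))                 # Claude
print(decode_categorical(10, options, "some tool"))   # some tool
```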
19
config/questionnaires/general_experience_ai.yaml
Normal file
@@ -0,0 +1,19 @@
questionnaire: "general_experience_ai"
scales:
  - name: "general_experience_ai"
    label: "General experience with AI systems"
    items:
      - id: "exp"
        text: "General experience with AI systems"
        inverse: false
    score_range: [1, 5]
    format: "ordinal"
    calculation: "response"
    response_options:
      "1": "No Experience"
      "2": "Minimal Experience"
      "3": "Moderate Experience"
      "4": "Strong Experience"
      "5": "Extensive Experience"
    output: "general_experience_ai"
    reference: "self"
104
config/questionnaires/generalized_mind_perception.yaml
Normal file
@@ -0,0 +1,104 @@
questionnaire: "generalized_mind_perception"
scales:
  - name: "generalized_mind_perception_favorite_ai"
    label: "Users' generalized mind perception of favorite AI system"
    items:
      - id: "mindperc_User_fav_1"
        text: "can feel happy."
        inverse: false
      - id: "mindperc_User_fav_2"
        text: "can love specific people."
        inverse: false
      - id: "mindperc_User_fav_3"
        text: "can feel pleasure."
        inverse: false
      - id: "mindperc_User_fav_4"
        text: "can experience gratitude."
        inverse: false
      - id: "mindperc_User_fav_5"
        text: "can feel pain."
        inverse: false
      - id: "mindperc_User_fav_6"
        text: "can feel stress."
        inverse: false
      - id: "mindperc_User_fav_7"
        text: "can experience fear."
        inverse: false
      - id: "mindperc_User_fav_8"
        text: "can feel tired."
        inverse: false
      - id: "moralagency_User_fav_1"
        text: "has a sense for what is right and wrong."
        inverse: false
      - id: "moralagency_User_fav_5"
        text: "behaves according to moral rules."
        inverse: false
      - id: "agency_User_fav_1"
        text: "can create new goals."
        inverse: false
      - id: "agency_User_fav_2"
        text: "can communicate with people."
        inverse: false
      - id: "mindperc_User_fav_9"
        text: "can hear and see the world."
        inverse: false
      - id: "mindperc_User_fav_10"
        text: "can learn from instruction."
        inverse: false
    score_range: [1, 5]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 5 = Strongly agree"
    output: "generalized_mind_perception_favorite_ai"
    reference: "Malle (2019), Banks (2019)"

  - name: "generalized_mind_perception_no_user"
    label: "Non-users' generalized mind perception of favorite AI system"
    items:
      - id: "mindper_noUser_1"
        text: "can feel happy."
        inverse: false
      - id: "mindper_noUser_2"
        text: "can love specific people."
        inverse: false
      - id: "mindper_noUser_3"
        text: "can feel pleasure."
        inverse: false
      - id: "mindper_noUser_4"
        text: "can experience gratitude."
        inverse: false
      - id: "mindper_noUser_5"
        text: "can feel pain."
        inverse: false
      - id: "mindper_noUser_6"
        text: "can feel stress."
        inverse: false
      - id: "mindper_noUser_7"
        text: "can experience fear."
        inverse: false
      - id: "mindper_noUser_8"
        text: "can feel tired."
        inverse: false
      - id: "moralagency_noUser_1"
        text: "has a sense for what is right and wrong."
        inverse: false
      - id: "moralagency_noUser_5"
        text: "behaves according to moral rules."
        inverse: false
      - id: "agency_noUser_1"
        text: "can create new goals."
        inverse: false
      - id: "agency_noUser_2"
        text: "can communicate with people."
        inverse: false
      - id: "mindper_noUser_9"
        text: "can hear and see the world."
        inverse: false
      - id: "mindper_noUser_10"
        text: "can learn from instruction."
        inverse: false
    score_range: [1, 5]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 5 = Strongly agree"
    output: "generalized_mind_perception_no_user"
    reference: "Malle (2019), Banks (2019)"
23
config/questionnaires/hope_and_concern.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "hope_and_concern"
scales:
  - name: "hope_and_concern"
    label: "Hopes and Concerns About AI"
    items:
      - id: "hopeconcern_1"
        text: "I hope AI will make life easier for most people."
        inverse: false
      - id: "hopeconcern_2"
        text: "I hope AI will solve important global problems."
        inverse: false
      - id: "hopeconcern_3"
        text: "I worry AI will take too many jobs."
        inverse: true
      - id: "hopeconcern_4"
        text: "I worry AI will increase social inequalities."
        inverse: true
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "hope_and_concern"
    reference: "self"
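Scales like the one above combine `calculation: "mean"` with items flagged `inverse: true`, which are reverse-coded within `score_range` before averaging. A minimal sketch of that logic, assuming a dict-based representation of the parsed YAML (the function name and call shape are assumptions, not the package's actual API):

```python
def score_scale(responses: dict, scale: dict) -> float:
    """Mean score over a scale's items, reverse-coding inverse items."""
    lo, hi = scale["score_range"]
    values = []
    for item in scale["items"]:
        value = responses[item["id"]]
        if item.get("inverse", False):
            value = lo + hi - value  # e.g. on [1, 5]: 1<->5, 2<->4
        values.append(value)
    return sum(values) / len(values)

scale = {
    "score_range": [1, 5],
    "items": [
        {"id": "hopeconcern_1", "inverse": False},
        {"id": "hopeconcern_3", "inverse": True},
    ],
}
print(score_scale({"hopeconcern_1": 4, "hopeconcern_3": 2}, scale))  # 4.0
```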
46
config/questionnaires/impact_in_general_on_skills.yaml
Normal file
@@ -0,0 +1,46 @@
questionnaire: "impact_in_general_on_skills"
scales:
  - name: "impact_in_general_on_skills_user"
    label: "Perceived Impact of AI on Personal Skill Development"
    items:
      - id: "skills_User_1"
        text: "I believe AI will make me less skilled at tasks I currently do."
        inverse: false
      - id: "skills_User_2"
        text: "I believe AI will help me learn new things."
        inverse: false
      - id: "skills_User_3"
        text: "AI makes me rely less on my own abilities."
        inverse: false
      - id: "skills_User_4"
        text: "AI helps me develop new skills."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "impact_in_general_on_skills_user"
    reference: "self"

  - name: "impact_in_general_on_skills_no_user"
    label: "Perceived Potential Impact of AI on Personal Skills (Non-Users)"
    items:
      - id: "skills_noUser_1"
        text: "I believe AI would make me less skilled at tasks I currently do."
        inverse: false
      - id: "skills_noUser_2"
        text: "I believe AI would help me learn new things."
        inverse: false
      - id: "skills_noUser_3"
        text: "AI would make me rely less on my own abilities."
        inverse: false
      - id: "skills_noUser_4"
        text: "AI would help me develop new skills."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "impact_in_general_on_skills_no_user"
    reference: "self"
18
config/questionnaires/impact_of_delegation_on_skills.yaml
Normal file
@@ -0,0 +1,18 @@
questionnaire: "impact_of_delegation_on_skills"
scales:
  - name: "impact_of_delegation_on_skills"
    label: "Perceived Impact of Letting AI Perform Tasks on Personal Skills"
    items:
      - id: "letAI_tsk_impct_skil"
        text: "Do you think letting AI do more tasks for you has improved or reduced your own skills?"
    calculation: "ordinal"
    response_options:
      "1": "Strongly reduced"
      "2": "Somewhat reduced"
      "3": "No impact"
      "4": "Somewhat improved"
      "5": "Strongly improved"
      "6": "I do not let AI make any tasks for me"
    output: "impact_of_delegation_on_skills"
    missing_response_option: ["6"]
    reference: "self"
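The `missing_response_option: ["6"]` field above declares that code "6" is not a scale point and should be treated as missing rather than scored. A hypothetical sketch of that recoding step (names are assumptions, not the package's actual API):

```python
def recode_missing(response, missing_codes):
    """Return None for responses declared as missing codes; keep others.

    Codes are compared as strings, matching the quoted keys in the YAML.
    """
    return None if str(response) in missing_codes else response

print(recode_missing(6, ["6"]))  # None
print(recode_missing(4, ["6"]))  # 4
```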
39
config/questionnaires/intention_usage.yaml
Normal file
@@ -0,0 +1,39 @@
questionnaire: "intention_usage"
scales:
  - name: "intention_use_favorite_ai"
    label: "Intention to use favorite AI system"
    items:
      - id: "int_use_bhvr_User_fav_1"
        text: "I intend to use [favorite AI] in the next 2 months."
        inverse: false
      - id: "int_use_bhvr_User_fav_2"
        text: "I predict I would use [favorite AI] in the next 2 months."
        inverse: false
      - id: "int_use_bhvr_User_fav_3"
        text: "I plan to use [favorite AI] in the next 2 months."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "intention_use_favorite_ai"
    reference: "Venkatesh et al., 2003"

  - name: "intention_use_no_user"
    label: "Intention to use AI systems (no user)"
    items:
      - id: "int_use_bhvr_noUser_1"
        text: "I intend to use an AI in the next 2 months."
        inverse: false
      - id: "int_use_bhvr_noUser_2"
        text: "I predict I would use an AI in the next 2 months."
        inverse: false
      - id: "int_use_bhvr_noUser_3"
        text: "I plan to use an AI in the next 2 months."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "intention_use_no_user"
    reference: "Venkatesh et al., 2003"
157
config/questionnaires/knowledge.yaml
Normal file
@@ -0,0 +1,157 @@
questionnaire: "knowledge"
label: "Knowledge about AI"
scales:

  - name: "subjective_knowledge"
    label: "Subjective knowledge about AI"
    items:
      - id: "subj_know"
        text: "How would you rate your knowledge about AI?"
        inverse: false
    score_range: [1, 5]
    format: "bipolar"
    calculation: "response"
    response_options: "1 = very low, 5 = very high"
    output: "subjective_knowledge"
    reference: "self"

  - name: "predicted_knowledge"
    label: "Predicted knowledge (self-estimate in percent)"
    items:
      - id: "predict_know_1"
        text: "What percent of these knowledge questions do you expect to answer correctly?"
        inverse: false
    score_range: [0, 100]
    format: "percent"
    calculation: "response"
    response_options: "0-100%"
    output: "predicted_knowledge"
    reference: "self"

  - name: "objective_knowledge"
    label: "Objective AI knowledge (18 factual items)"
    items:
      - id: "obj_know_1_1"
        text: "LLMs are trained with a large amount of text data (e.g., internet, social media)."
        correct: 1
      - id: "obj_know_1_2"
        text: "LLMs calculate for their answers which word is most likely to come next."
        correct: 1
      - id: "obj_know_1_3"
        text: "The responses of LLMs may be biased (e.g., racially) based on the data they were trained on."
        correct: 1
      - id: "obj_know_1_4"
        text: "The statements of LLMs are always correct."
        correct: 2
      - id: "obj_know_1_5"
        text: "Humans can still easily recognize AI-generated speech as artificial speech."
        correct: 2
      - id: "obj_know_1_6"
        text: "LLMs can intentionally lie and spread false information."
        correct: 2
      - id: "obj_know_1_7"
        text: "Humans can answer questions about a text better than LLMs."
        correct: 2
      - id: "obj_know_1_8"
        text: "LLMs have learned to understand language like a human."
        correct: 2
      - id: "obj_know_1_9"
        text: "LLMs have no real understanding of what they write."
        correct: 1
      - id: "obj_know_1_10"
        text: "In machine learning, two common groups of strategies to train algorithms are supervised and unsupervised learning."
        correct: 1
      - id: "obj_know_1_11"
        text: "Artificial neural networks attempt to fully replicate neural networks in the brain."
        correct: 2
      - id: "obj_know_1_12"
        text: "Using AI, videos can be created that are indistinguishable from videos created by real people."
        correct: 1
      - id: "obj_know_1_13"
        text: "A strong AI can make decisions on its own."
        correct: 1
      - id: "obj_know_1_14"
        text: "Machine learning is based on statistical principles."
        correct: 1
      - id: "obj_know_1_15"
        text: "A chatbot can correctly answer the question 'Will it rain tomorrow?' with a high probability."
        correct: 1
      - id: "obj_know_1_16"
        text: "The language understanding of AI systems does not yet reach that of humans."
        correct: 1
      - id: "obj_know_1_17"
        text: "The automatic generation of texts has already been used for years in journalism and e-commerce, for example."
        correct: 1
      - id: "obj_know_1_18"
        text: "Content created by AI must be legally marked as such."
        correct: 2
    score_range: [1, 2]
    calculation: "sum_correct"
    response_options: "1 = TRUE, 2 = FALSE (participant answer is scored as correct if it matches 'correct')"
    output: "objective_knowledge"
    reference: "Adapted from Said et al., 2022 and Lermann Henestrosa & Kimmerle, 2024"
    retain_single_items: true

  - name: "objective_knowledge_confidence"
    label: "Confidence in objective knowledge about AI"
    items:
      - id: "obj_know_2_1"
        text: "LLMs are trained with a large amount of text data (e.g., internet, social media)."
        inverse: false
      - id: "obj_know_2_2"
        text: "LLMs calculate for their answers which word is most likely to come next."
        inverse: false
      - id: "obj_know_2_3"
        text: "The responses of LLMs may be biased (e.g., racially) based on the data they were trained on."
        inverse: false
      - id: "obj_know_2_4"
        text: "The statements of LLMs are always correct."
        inverse: false
      - id: "obj_know_2_5"
        text: "Humans can still easily recognize AI-generated speech as artificial speech."
        inverse: false
      - id: "obj_know_2_6"
        text: "LLMs can intentionally lie and spread false information."
        inverse: false
      - id: "obj_know_2_7"
        text: "Humans can answer questions about a text better than LLMs."
        inverse: false
      - id: "obj_know_2_8"
        text: "LLMs have learned to understand language like a human."
        inverse: false
      - id: "obj_know_2_9"
        text: "LLMs have no real understanding of what they write."
        inverse: false
      - id: "obj_know_2_10"
        text: "In machine learning, two common groups of strategies to train algorithms are supervised and unsupervised learning."
        inverse: false
      - id: "obj_know_2_11"
        text: "Artificial neural networks attempt to fully replicate neural networks in the brain."
        inverse: false
      - id: "obj_know_2_12"
        text: "Using AI, videos can be created that are indistinguishable from videos created by real people."
        inverse: false
      - id: "obj_know_2_13"
        text: "A strong AI can make decisions on its own."
        inverse: false
      - id: "obj_know_2_14"
        text: "Machine learning is based on statistical principles."
        inverse: false
      - id: "obj_know_2_15"
        text: "A chatbot can correctly answer the question 'Will it rain tomorrow?' with a high probability."
        inverse: false
      - id: "obj_know_2_16"
        text: "The language understanding of AI systems does not yet reach that of humans."
        inverse: false
      - id: "obj_know_2_17"
        text: "The automatic generation of texts has already been used for years in journalism and e-commerce, for example."
        inverse: false
      - id: "obj_know_2_18"
        text: "Content created by AI must be legally marked as such."
        inverse: false
    score_range: [1, 6]
    format: "Confidence scale"
    calculation: "mean"
    response_options: "1 = I guessed (50%), 2 = 60%, 3 = 70%, 4 = 80%, 5 = 90%, 6 = I am sure (100%)"
    output: "objective_knowledge_confidence"
    reference: "Adapted from Said et al., 2022 and Lermann Henestrosa & Kimmerle, 2024"
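For `calculation: "sum_correct"`, each item carries a `correct` key and the score is the count of matching answers. A minimal sketch under that reading of the config (the function name is an assumption, not the package's actual API):

```python
def sum_correct(responses: dict, items: list) -> int:
    """Count items where the participant's coded answer equals `correct`."""
    return sum(
        1 for item in items if responses.get(item["id"]) == item["correct"]
    )

items = [
    {"id": "obj_know_1_1", "correct": 1},  # TRUE statement
    {"id": "obj_know_1_4", "correct": 2},  # FALSE statement
]
# Participant answers TRUE (1) to both: only the first is correct.
print(sum_correct({"obj_know_1_1": 1, "obj_know_1_4": 1}, items))  # 1
```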
13
config/questionnaires/knowledge_how_to_start_using_ai.yaml
Normal file
@@ -0,0 +1,13 @@
questionnaire: "knowledge_how_to_start_using_ai"
scales:
  - name: "knowledge_how_to_start_using_ai_no_user"
    label: "Knowledge of How to Start Using AI"
    items:
      - id: "know_whr_start_noUser"
        text: "If you would be prompted to now start using an AI system - would you know where to start and what to do?"
    calculation: "categorical"
    response_options:
      "1": "Yes"
      "2": "No, not really"
    output: "knowledge_how_to_start_using_ai_no_user"
    reference: "self"
23
config/questionnaires/lack_of_fomo.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "lack_of_fomo"
scales:
  - name: "lack_of_fomo_no_user"
    label: "Lack of Fear of Missing Out on AI Among Non-Users"
    items:
      - id: "noFOMO_noUser_1"
        text: "I do not worry about missing out on opportunities by not using AI."
        inverse: false
      - id: "noFOMO_noUser_2"
        text: "I feel secure in my current technological approach even though many others are adopting AI."
        inverse: false
      - id: "noFOMO_noUser_3"
        text: "I rarely feel pressured to keep up with the latest AI trends."
        inverse: false
      - id: "noFOMO_noUser_4"
        text: "I am content with the technology I use and do not feel left behind by AI advancements."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "lack_of_fomo_no_user"
    reference: "self"
20
config/questionnaires/loneliness.yaml
Normal file
@@ -0,0 +1,20 @@
questionnaire: "loneliness"
scales:
  - name: "loneliness"
    label: "Perceived Loneliness"
    items:
      - id: "loneliness_1"
        text: "... that you lack companionship?"
        inverse: false
      - id: "loneliness_2"
        text: "... left out?"
        inverse: false
      - id: "loneliness_3"
        text: "... isolated from others?"
        inverse: false
    score_range: [1, 3]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = hardly ever, 2 = some of the time, 3 = often"
    output: "loneliness"
    reference: "Hughes et al. (2008)"
101
config/questionnaires/machine_heuristic.yaml
Normal file
@@ -0,0 +1,101 @@
questionnaire: "machine_heuristic"
scales:
  - name: "machine_heuristic_1_favorite_ai"
    label: "Belief in the machine heuristic for favorite AI System (Set 1)"
    items:
      - id: "macheu_User_fav_1"
        text: "expert"
        inverse: false
      - id: "macheu_User_fav_2"
        text: "efficient"
        inverse: false
      - id: "macheu_User_fav_3"
        text: "rigid"
        inverse: false
      - id: "macheu_User_fav_4"
        text: "fair"
        inverse: false
      - id: "macheu_User_fav_5"
        text: "complex"
        inverse: false
      - id: "macheu_User_fav_6"
        text: "superfluous"
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "machine_heuristic_1_favorite_ai"
    reference: "Yang & Sundar, 2024"

  - name: "machine_heuristic_1_no_user"
    label: "Belief in the machine heuristic for AI systems (no user) (Set 1)"
    items:
      - id: "macheu_noUser_1"
        text: "expert"
        inverse: false
      - id: "macheu_noUser_2"
        text: "efficient"
        inverse: false
      - id: "macheu_noUser_3"
        text: "rigid"
        inverse: false
      - id: "macheu_noUser_4"
        text: "fair"
        inverse: false
      - id: "macheu_noUser_5"
        text: "complex"
        inverse: false
      - id: "macheu_noUser_6"
        text: "superfluous"
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "machine_heuristic_1_no_user"
    reference: "Yang & Sundar, 2024"

  - name: "machine_heuristic_2_favorite_ai"
    label: "Belief in the machine heuristic for favorite AI System (Set 2)"
    items:
      - id: "macheu_User_fav_7"
        text: "neutral"
        inverse: false
      - id: "macheu_User_fav_8"
        text: "unbiased"
        inverse: false
      - id: "macheu_User_fav_9"
        text: "objective"
        inverse: false
      - id: "macheu_User_fav_10"
        text: "accurate"
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "machine_heuristic_2_favorite_ai"
    reference: "Sundar & Kim, 2019"

  - name: "machine_heuristic_2_no_user"
    label: "Belief in the machine heuristic for AI systems (no user) (Set 2)"
    items:
      - id: "macheu_noUser_7"
        text: "neutral"
        inverse: false
      - id: "macheu_noUser_8"
        text: "unbiased"
        inverse: false
      - id: "macheu_noUser_9"
        text: "objective"
        inverse: false
      - id: "macheu_noUser_10"
        text: "accurate"
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 7 = strongly agree"
    output: "machine_heuristic_2_no_user"
    reference: "Sundar & Kim, 2019"
101
config/questionnaires/microblog_and_social_network_usage.yaml
Normal file
@@ -0,0 +1,101 @@
questionnaire: "microblog"
scales:
  - name: "microblog_profile"
    label: "Profile on Microblogging Services"
    items:
      - id: "microblog_profile"
        text: "Do you have a profile on a microblogging service (e.g., Bluesky, X, Truth Social)?"
    score_range: [1, 2]
    format: "categorical"
    calculation: "response"
    response_options:
      "1": "Yes"
      "2": "No"
    output: "microblog_profile"
    reference: "self"

  - name: "microblog_usage_frequency"
    label: "Frequency of Logging into Microblogging Services"
    items:
      - id: "microblog_frequency"
        text: "How often do you log in to microblogging services?"
    score_range: [1, 7]
    format: "ordinal"
    calculation: "response"
    response_options:
      "1": "Less frequently"
      "2": "Monthly"
      "3": "A few times per month"
      "4": "Weekly"
      "5": "Several times a week"
      "6": "Daily"
      "7": "Several times a day"
    output: "microblog_usage_frequency"
    reference: "self"

  - name: "microblog_usage_since_ai_use"
    label: "Change in Microblogging Service Usage Since Starting AI Use"
    items:
      - id: "microblog_ai"
        text: "Since you started using AI, do you use microblogging services less or more often?"
    score_range: [1, 5]
    format: "ordinal"
    calculation: "response"
    response_options:
      "1": "A lot less"
      "2": "Less"
      "3": "About the same"
      "4": "More"
      "5": "A lot more"
    output: "microblog_usage_since_ai_use"
    reference: "self"

  - name: "professional_social_network_profile"
    label: "Profile on professional Social Networking Sites (SNS)"
    items:
      - id: "sns_profile"
        text: "Do you have a profile on a social networking site for professional purposes (e.g., LinkedIn)?"
    score_range: [1, 2]
    format: "categorical"
    calculation: "response"
    response_options:
      "1": "Yes"
      "2": "No"
    output: "professional_social_network_profile"
    reference: "self"

  - name: "professional_social_network_usage_frequency"
    label: "Frequency of Logging into professional Social Networking Sites (SNS)"
    items:
      - id: "sns_frequency"
        text: "How often do you log in to social networking sites for professional purposes?"
    score_range: [1, 7]
    format: "ordinal"
    calculation: "response"
    response_options:
      "1": "Less frequently"
      "2": "Monthly"
      "3": "A few times per month"
      "4": "Weekly"
      "5": "Several times a week"
      "6": "Daily"
      "7": "Several times a day"
    output: "professional_social_network_usage_frequency"
    reference: "self"

  - name: "professional_social_network_usage_since_ai_use"
    label: "Change in Professional Social Networking Site Usage Since Starting AI Use"
    items:
      - id: "sns_ai"
        text: "Since you started using AI, do you use professional social networking sites less or more often?"
    score_range: [1, 5]
    format: "ordinal"
    calculation: "response"
    response_options:
      "1": "A lot less"
      "2": "Less"
      "3": "About the same"
      "4": "More"
      "5": "A lot more"
    output: "professional_social_network_usage_since_ai_use"
    reference: "self"
83
config/questionnaires/mind_perception.yaml
Normal file
@@ -0,0 +1,83 @@
questionnaire: "mind_perception"
|
||||
scales:
|
||||
- name: "mind_perception_favorite_ai"
|
||||
label: "Perceived Mind Perception of Favorite AI"
|
||||
items:
|
||||
- id: "mindperc_User_fav_1"
|
||||
text: "(piped fav AI) can feel happy."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_2"
|
||||
text: "(piped fav AI) can love specific people."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_3"
|
||||
text: "(piped fav AI) can feel pleasure."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_4"
|
||||
text: "(piped fav AI) can experience gratitude."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_5"
|
||||
text: "(piped fav AI) can feel pain."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_6"
|
||||
text: "(piped fav AI) can feel stress."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_7"
|
||||
text: "(piped fav AI) can experience fear."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_8"
|
||||
text: "(piped fav AI) can feel tired."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_9"
|
||||
text: "(piped fav AI) can see and hear the world."
|
||||
inverse: false
|
||||
- id: "mindperc_User_fav_10"
|
||||
text: "(piped fav AI) can learn from instruction."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "mind_perception_favorite_ai"
|
||||
reference: "self"
|
||||
|
||||
- name: "mind_perception_no_user"
|
||||
label: "Perceived Mind Perception of AI in General"
|
||||
items:
|
||||
- id: "mindper_noUser_1"
|
||||
text: "AI can feel happy."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_2"
|
||||
text: "AI can love specific people."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_3"
|
||||
text: "AI can feel pleasure."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_4"
|
||||
text: "AI can experience gratitude."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_5"
|
||||
text: "AI can feel pain."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_6"
|
||||
text: "AI can feel stress."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_7"
|
||||
text: "AI can experience fear."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_8"
|
||||
text: "AI can feel tired."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_9"
|
||||
text: "AI can see and hear the world."
|
||||
inverse: false
|
||||
- id: "mindper_noUser_10"
|
||||
text: "AI can learn from instruction."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "mind_perception_no_user"
|
||||
reference: "self"
|
||||
|
||||
|
||||
14
config/questionnaires/modality.yaml
Normal file
@@ -0,0 +1,14 @@
questionnaire: "modality"
|
||||
scales:
|
||||
- name: "modality_favorite_ai"
|
||||
label: "Modality used when interacting with favorite AI system"
|
||||
items:
|
||||
- id: "mod_User_fav"
|
||||
text: "How do you interact with [favorite AI] most often?"
|
||||
calculation: "categorical"
|
||||
response_options:
|
||||
"1": "mostly via voice"
|
||||
"12": "mix of voice and text"
|
||||
"2": "mostly via text"
|
||||
output: "modality_favorite_ai"
|
||||
reference: "self"
|
||||
55
config/questionnaires/needs.yaml
Normal file
@@ -0,0 +1,55 @@
questionnaire: "needs"
|
||||
scales:
|
||||
- name: "need_to_belong"
|
||||
label: "need to belong"
|
||||
items:
|
||||
- id: "pers_specific_1"
|
||||
text: "I don't like being alone."
|
||||
inverse: false
|
||||
- id: "pers_specific_2"
|
||||
text: "I have a strong 'need to belong.'"
|
||||
inverse: false
|
||||
score_range: [1, 7]
|
||||
format: "likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = strongly disagree, 7 = strongly agree"
|
||||
output: "need_to_belong"
|
||||
reference: "2 items from Baumeister & Leary (2015)"
|
||||
|
||||
- name: "need_for_cognition"
|
||||
label: "need for cognition"
|
||||
items:
|
||||
- id: "pers_specific_3"
|
||||
text: "I would prefer complex to simple problems."
|
||||
inverse: false
|
||||
- id: "pers_specific_4"
|
||||
text: "Thinking is not my idea of fun."
|
||||
inverse: true
|
||||
- id: "pers_specific_5"
|
||||
text: "I would rather do something that requires little thought than something that is sure to challenge my thinking abilities."
|
||||
inverse: true
|
||||
score_range: [1, 7]
|
||||
format: "likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = strongly disagree, 7 = strongly agree"
|
||||
output: "need_for_cognition"
|
||||
reference: "3 items from Lins de Holanda Coelho et al. (2020)"
|
||||
|
||||
- name: "need_for_closure"
|
||||
label: "need for need_for_closure"
|
||||
items:
|
||||
- id: "pers_specific_6"
|
||||
text: "I dont like situations that are uncertain."
|
||||
inverse: false
|
||||
- id: "pers_specific_7"
|
||||
text: "I dislike questions which could be answered in many different ways."
|
||||
inverse: false
|
||||
- id: "pers_specific_8"
|
||||
text: " I feel uncomfortable when I dont understand the reason why an event occurred in my life."
|
||||
inverse: false
|
||||
score_range: [1, 7]
|
||||
format: "likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = strongly disagree, 7 = strongly agree"
|
||||
output: "need_for_closure"
|
||||
reference: "3 items from Roets & van Hiel (2011)"
|
||||
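Scales such as `need_for_cognition` mix positively and inverse-keyed items (`inverse: true`) and aggregate with `calculation: "mean"`. A minimal sketch of how such a scale might be scored, assuming the usual reversal rule `reversed = min + max − raw` over `score_range` (function and field names are illustrative, not the package's actual API):

```python
def score_mean_scale(responses, items, score_range):
    """Reverse-score items marked `inverse: true`, then average.

    `responses` maps item ids to raw codes; `items` mirrors the YAML item list.
    Assumes the common reversal rule: reversed = min + max - raw.
    """
    lo, hi = score_range
    values = []
    for item in items:
        raw = responses[item["id"]]
        values.append(lo + hi - raw if item.get("inverse") else raw)
    return sum(values) / len(values)

# need_for_cognition: items 4 and 5 are inverse-keyed on a 1-7 range
items = [
    {"id": "pers_specific_3", "inverse": False},
    {"id": "pers_specific_4", "inverse": True},
    {"id": "pers_specific_5", "inverse": True},
]
score = score_mean_scale(
    {"pers_specific_3": 6, "pers_specific_4": 2, "pers_specific_5": 3},
    items,
    (1, 7),
)  # raw 2 and 3 become 6 and 5 after reversal
```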
80
config/questionnaires/needs_satisfaction.yaml
Normal file
@@ -0,0 +1,80 @@
questionnaire: "needs_satisfaction"
|
||||
scales:
|
||||
- name: "need_satisfaction_morality"
|
||||
label: "Satisfaction of morality need When interacting with AI systems"
|
||||
items:
|
||||
- id: "need_satisf_1"
|
||||
text: "When interacting with AI systems, I feel a strong sense of moral fulfilment."
|
||||
inverse: false
|
||||
- id: "need_satisf_2"
|
||||
text: "When interacting with AI systems, I feel that I am being a good person."
|
||||
inverse: false
|
||||
- id: "need_satisf_3"
|
||||
text: "When interacting with AI systems, I embody my moral values."
|
||||
inverse: false
|
||||
- id: "need_satisf_4"
|
||||
text: "When interacting with AI systems, I feel that I am doing the right thing."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Somewhat disagree, 3 = Neither agree nor disagree, 4 = Somewhat agree, 5 = Strongly agree"
|
||||
output: "need_satisfaction_morality"
|
||||
reference: "Prentice et al. (2019)"
|
||||
|
||||
- name: "need_satisfaction_competence"
|
||||
label: "Satisfaction of competence need When interacting with AI systems"
|
||||
items:
|
||||
- id: "need_satisf_5"
|
||||
text: "When interacting with AI systems, I feel very capable and effective."
|
||||
inverse: false
|
||||
- id: "need_satisf_6"
|
||||
text: "When interacting with AI systems, I feel like a competent person."
|
||||
inverse: false
|
||||
- id: "need_satisf_7"
|
||||
text: "When interacting with AI systems, I feel that I know what I’m doing."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Somewhat disagree, 3 = Neither agree nor disagree, 4 = Somewhat agree, 5 = Strongly agree"
|
||||
output: "need_satisfaction_competence"
|
||||
reference: "Prentice et al. (2019)"
|
||||
|
||||
- name: "need_satisfaction_autonomy"
|
||||
label: "Satisfaction of autonomy need When interacting with AI systems"
|
||||
items:
|
||||
- id: "need_satisf_8"
|
||||
text: "When interacting with AI systems, I feel free to be who I am."
|
||||
inverse: false
|
||||
- id: "need_satisf_9"
|
||||
text: "When interacting with AI systems, I have a say in what happens and I can voice my opinion."
|
||||
inverse: false
|
||||
- id: "need_satisf_10"
|
||||
text: "When interacting with AI systems, I believe that my choices are based on my true desires and values."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Somewhat disagree, 3 = Neither agree nor disagree, 4 = Somewhat agree, 5 = Strongly agree"
|
||||
output: "need_satisfaction_autonomy"
|
||||
reference: "Prentice et al. (2019)"
|
||||
|
||||
- name: "need_satisfaction_relatedness"
|
||||
label: "Satisfaction of relatedness need When interacting with AI systems"
|
||||
items:
|
||||
- id: "need_satisf_11"
|
||||
text: "When interacting with AI systems, I feel the support of others."
|
||||
inverse: false
|
||||
- id: "need_satisf_12"
|
||||
text: "When interacting with AI systems, I feel a sense of closeness and connectedness to others."
|
||||
inverse: false
|
||||
- id: "need_satisf_13"
|
||||
text: "When interacting with AI systems, I feel connected."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Somewhat disagree, 3 = Neither agree nor disagree, 4 = Somewhat agree, 5 = Strongly agree"
|
||||
output: "need_satisfaction_relatedness"
|
||||
reference: "self"
|
||||
16
config/questionnaires/number_of_tasks_delegated_to_ai.yaml
Normal file
@@ -0,0 +1,16 @@
questionnaire: "number_of_tasks_delegated_to_ai"
|
||||
scales:
|
||||
- name: "number_of_tasks_delegated_to_ai"
|
||||
label: "Number of Tasks Completed Entirely Using AI Assistance"
|
||||
items:
|
||||
- id: "no_tsks_delg"
|
||||
text: "In the last two months, how many tasks did you complete entirely using AI assistance? (By task we mean anything you personally consider a task)"
|
||||
calculation: "ordinal"
|
||||
response_options:
|
||||
"1": "None"
|
||||
"2": "1–2"
|
||||
"3": "3–5"
|
||||
"4": "6–10"
|
||||
"5": "More than 10"
|
||||
output: "number_of_tasks_delegated_to_ai"
|
||||
reference: "self"
|
||||
70
config/questionnaires/parasocial_behavior.yaml
Normal file
@@ -0,0 +1,70 @@
questionnaire: "parasocial_behavior"
|
||||
scales:
|
||||
- name: "parasocial_behavior_favorite_ai"
|
||||
label: "Extend to which you would behave in the following ways towards your favorite AI"
|
||||
items:
|
||||
- id: "behavior_User_fav_1"
|
||||
text: "Say “thank you” and “good bye” to it."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_2"
|
||||
text: "Occasionally react very emotionally towards it."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_3"
|
||||
text: "Sometimes gesture towards it."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_4"
|
||||
text: "Spontaneously say something to it in certain moments."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_5"
|
||||
text: "Occasionally shout something at it."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_6"
|
||||
text: "Express my thoughts and opinions to it."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_7"
|
||||
text: "Am unfriendly or rude."
|
||||
inverse: false
|
||||
- id: "behavior_User_fav_8"
|
||||
text: "Treat it the same way I treat other people."
|
||||
inverse: false
|
||||
score_range: [1, 7]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Not at all, 7 = Very much"
|
||||
output: "parasocial_behavior_favorite_ai"
|
||||
reference: "adapted from Schramm, H., & Hartmann, T. (2008); https://doi.org/10.1515/COMM.2008.025"
|
||||
retain_single_items: true
|
||||
|
||||
- name: "parasocial_behavior_no_user"
|
||||
label: "Expected Behaviors If You Started Interacting with AI"
|
||||
items:
|
||||
- id: "behavior_noUser_1"
|
||||
text: "Say “thank you” and “good bye” to it."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_2"
|
||||
text: "Occasionally react very emotionally towards it."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_3"
|
||||
text: "Sometimes gesture towards it."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_4"
|
||||
text: "Spontaneously say something to it in certain moments."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_5"
|
||||
text: "Occasionally shout something at it."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_6"
|
||||
text: "Express my thoughts and opinions to it."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_7"
|
||||
text: "Am unfriendly or rude."
|
||||
inverse: false
|
||||
- id: "behavior_noUser_8"
|
||||
text: "Treat it the same way I treat other people."
|
||||
inverse: false
|
||||
score_range: [1, 7]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Not at all, 7 = Very much"
|
||||
output: "parasocial_behavior_no_user"
|
||||
reference: "adapted from Schramm, H., & Hartmann, T. (2008); https://doi.org/10.1515/COMM.2008.025"
|
||||
51
config/questionnaires/perceived_anthropomorphism.yaml
Normal file
@@ -0,0 +1,51 @@
questionnaire: "perceived_anthropomorphism"
|
||||
scales:
|
||||
- name: "perceived_anthropomorphism_favorite_ai"
|
||||
label: "Perceived anthropomorphism of favorite AI system"
|
||||
items:
|
||||
- id: "anthro_User_fav_1"
|
||||
text: "fake - natural"
|
||||
inverse: false
|
||||
- id: "anthro_User_fav_2"
|
||||
text: "machinelike - humanlike"
|
||||
inverse: false
|
||||
- id: "anthro_User_fav_3"
|
||||
text: "unconscious - conscious"
|
||||
inverse: false
|
||||
- id: "anthro_User_fav_4"
|
||||
text: "artificial - lifelike"
|
||||
inverse: false
|
||||
- id: "anthro_User_fav_5"
|
||||
text: "rigid - elegant"
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "bipolar"
|
||||
calculation: "mean"
|
||||
response_options: "1 = agree with left option, 5 = agree with right option"
|
||||
output: "perceived_anthropomorphism_favorite_ai"
|
||||
reference: "Bartneck et al. (2009)"
|
||||
|
||||
- name: "perceived_anthropomorphism_ai_no_user"
|
||||
label: "Perceived anthropomorphism (no user)"
|
||||
items:
|
||||
- id: "anthro_noUser_1"
|
||||
text: "fake - natural"
|
||||
inverse: false
|
||||
- id: "anthro_noUser_2"
|
||||
text: "machinelike - humanlike"
|
||||
inverse: false
|
||||
- id: "anthro_noUser_3"
|
||||
text: "unconscious - conscious"
|
||||
inverse: false
|
||||
- id: "anthro_noUser_4"
|
||||
text: "artificial - lifelike"
|
||||
inverse: false
|
||||
- id: "anthro_noUser_5"
|
||||
text: "rigid - elegant"
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "bipolar"
|
||||
calculation: "mean"
|
||||
response_options: "1 = agree with left option, 5 = agree with right option"
|
||||
output: "perceived_anthropomorphism_ai_no_user"
|
||||
reference: "Bartneck et al. (2009)"
|
||||
23
config/questionnaires/perceived_changes_attitudes_usage.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "perceived_changes_attitudes_usage"
|
||||
scales:
|
||||
- name: "perceived_changes_attitudes_usage"
|
||||
label: "Perceived Changes in Attitudes and Usage of AI Over the Past Year"
|
||||
items:
|
||||
- id: "changes_User_1"
|
||||
text: "I feel more comfortable using AI than I did a year ago."
|
||||
inverse: false
|
||||
- id: "changes_User_2"
|
||||
text: "I trust AI more now than I did a year ago."
|
||||
inverse: false
|
||||
- id: "changes_User_3"
|
||||
text: "I use AI more frequently now than I did a year ago."
|
||||
inverse: false
|
||||
- id: "changes_User_4"
|
||||
text: "I feel more anxious about AI than I did a year ago."
|
||||
inverse: true
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "perceived_changes_attitudes_usage"
|
||||
reference: "self"
|
||||
52
config/questionnaires/perceived_intelligence.yaml
Normal file
@@ -0,0 +1,52 @@
questionnaire: "perceived_intelligence"
|
||||
scales:
|
||||
- name: "perceived_intelligence_favorite_ai"
|
||||
label: "Perceived intelligence of favorite AI system"
|
||||
items:
|
||||
- id: "intell_User_fav_1"
|
||||
text: "foolish - sensible"
|
||||
inverse: false
|
||||
- id: "intell_User_fav_2"
|
||||
text: "incompetent - competent"
|
||||
inverse: false
|
||||
- id: "intell_User_fav_3"
|
||||
text: "unintelligent - intelligent"
|
||||
inverse: false
|
||||
- id: "intell_User_fav_4"
|
||||
text: "irresponsible - responsible"
|
||||
inverse: false
|
||||
- id: "intell_User_fav_5"
|
||||
text: "stupid - knowledgeable"
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "bipolar"
|
||||
calculation: "mean"
|
||||
response_options: "1 = agree with left option, 5 = agree with right option"
|
||||
output: "perceived_intelligence_favorite_ai"
|
||||
reference: "Bartneck et al. (2009)"
|
||||
retain_single_items: true
|
||||
|
||||
- name: "perceived_intelligence_ai_no_user"
|
||||
label: "Perceived intelligence of AI (no user)"
|
||||
items:
|
||||
- id: "intell_noUser_1"
|
||||
text: "foolish - sensible"
|
||||
inverse: false
|
||||
- id: "intell_noUser_2"
|
||||
text: "incompetent - competent"
|
||||
inverse: false
|
||||
- id: "intell_noUser_3"
|
||||
text: "unintelligent - intelligent"
|
||||
inverse: false
|
||||
- id: "intell_noUser_4"
|
||||
text: "irresponsible - responsible"
|
||||
inverse: false
|
||||
- id: "intell_noUser_5"
|
||||
text: "stupid - knowledgeable"
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "bipolar"
|
||||
calculation: "mean"
|
||||
response_options: "1 = agree with left option, 5 = agree with right option"
|
||||
output: "perceived_intelligence_ai_no_user"
|
||||
reference: "Bartneck et al. (2009)"
|
||||
23
config/questionnaires/perceived_lack_of_need.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "perceived_lack_of_need"
|
||||
scales:
|
||||
- name: "perceived_lack_of_need_no_user"
|
||||
label: "Perceived Lack of Need for AI Among Non-Users"
|
||||
items:
|
||||
- id: "lack_of_need_noUser_1"
|
||||
text: "I do not see any additional benefit in using AI when traditional methods already meet my needs."
|
||||
inverse: false
|
||||
- id: "lack_of_need_noUser_2"
|
||||
text: "The tools I currently use are entirely sufficient for my work and personal tasks."
|
||||
inverse: false
|
||||
- id: "lack_of_need_noUser_3"
|
||||
text: "I believe that AI does not offer a significant improvement over my existing technologies."
|
||||
inverse: false
|
||||
- id: "lack_of_need_noUser_4"
|
||||
text: "I am satisfied with my current solutions and feel no compulsion to try AI-based alternatives."
|
||||
inverse: false
|
||||
score_range: [1, 7]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
|
||||
output: "perceived_lack_of_need_no_user"
|
||||
reference: "self"
|
||||
82
config/questionnaires/perceived_moral_agency.yaml
Normal file
@@ -0,0 +1,82 @@
questionnaire: "perceived_moral_agency"
|
||||
scales:
|
||||
- name: "perceived_moral_agency_favorite_ai"
|
||||
label: "Perceived Moral Agency of Favorite AI"
|
||||
items:
|
||||
- id: "moralagency_User_fav_1"
|
||||
text: "(piped fav AI) has a sense for what is right and wrong."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_2"
|
||||
text: "(piped fav AI) can think through whether an action is moral."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_3"
|
||||
text: "(piped fav AI) might feel obligated to behave in a moral way."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_4"
|
||||
text: "(piped fav AI) is capable of being rational about good and evil."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_5"
|
||||
text: "(piped fav AI) behaves according to moral rules."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_6"
|
||||
text: "(piped fav AI) would refrain from doing things that have painful repercussions."
|
||||
inverse: false
|
||||
- id: "moralagency_User_fav_7"
|
||||
text: "(piped fav AI) can only behave how it is programmed to behave."
|
||||
inverse: true
|
||||
- id: "moralagency_User_fav_8"
|
||||
text: "(piped fav AI)'s actions are the result of its programming."
|
||||
inverse: true
|
||||
- id: "moralagency_User_fav_9"
|
||||
text: "(piped fav AI) can only do what humans tell it to do."
|
||||
inverse: true
|
||||
- id: "moralagency_User_fav_10"
|
||||
text: "(piped fav AI) would never do anything it was not programmed to do."
|
||||
inverse: true
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "perceived_moral_agency_favorite_ai"
|
||||
reference: "self"
|
||||
|
||||
- name: "perceived_moral_agency_no_user"
|
||||
label: "Perceived Moral Agency of AI in General"
|
||||
items:
|
||||
- id: "moralagency_noUser_1"
|
||||
text: "AI has a sense for what is right and wrong."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_2"
|
||||
text: "AI can think through whether an action is moral."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_3"
|
||||
text: "AI might feel obligated to behave in a moral way."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_4"
|
||||
text: "AI is capable of being rational about good and evil."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_5"
|
||||
text: "AI behaves according to moral rules."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_6"
|
||||
text: "AI would refrain from doing things that have painful repercussions."
|
||||
inverse: false
|
||||
- id: "moralagency_noUser_7"
|
||||
text: "AI can only behave how it is programmed to behave."
|
||||
inverse: true
|
||||
- id: "moralagency_noUser_8"
|
||||
text: "AI's actions are the result of its programming."
|
||||
inverse: true
|
||||
- id: "moralagency_noUser_9"
|
||||
text: "AI can only do what humans tell it to do."
|
||||
inverse: true
|
||||
- id: "moralagency_noUser_10"
|
||||
text: "AI would never do anything it was not programmed to do."
|
||||
inverse: true
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "perceived_moral_agency_no_user"
|
||||
reference: "self"
|
||||
|
||||
26
config/questionnaires/perceived_reliance_on_ai.yaml
Normal file
@@ -0,0 +1,26 @@
questionnaire: "perceived_reliance_on_ai"
|
||||
scales:
|
||||
- name: "perceived_reliance_on_ai_user"
|
||||
label: "Perceived Reliance on AI in General"
|
||||
items:
|
||||
- id: "perc_reliance_User_1"
|
||||
text: "I feel unprotected when I do not have access to AI."
|
||||
inverse: false
|
||||
- id: "perc_reliance_User_2"
|
||||
text: "I am concerned about the idea of being left behind in my tasks or projects if I do not use AI."
|
||||
inverse: false
|
||||
- id: "perc_reliance_User_3"
|
||||
text: "I do everything possible to stay updated with AI to impress or remain relevant in my field."
|
||||
inverse: false
|
||||
- id: "perc_reliance_User_4"
|
||||
text: "I constantly need validation or feedback from AI systems to feel confident in my decisions."
|
||||
inverse: false
|
||||
- id: "perc_reliance_User_5"
|
||||
text: "I fear that AI might replace my current skills or abilities."
|
||||
inverse: false
|
||||
score_range: [1, 5]
|
||||
format: "Likert"
|
||||
calculation: "mean"
|
||||
response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
|
||||
output: "perceived_reliance_on_ai_user"
|
||||
reference: "self"
|
||||
23
config/questionnaires/perceived_role_ai_should_take.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "perceived_role_ai_should_take"
|
||||
scales:
|
||||
- name: "perceived_role_ai_should_take_no_user"
|
||||
label: "Perceived Role AI Systems SHOULD Take"
|
||||
items:
|
||||
- id: "shld_role_part_noUser"
|
||||
text: "In your understanding: What role SHOULD AI systems generally take when one interacts with them?"
|
||||
open_ended_id: "shld_role_part_noUser_11_TEXT"
|
||||
calculation: "categorical"
|
||||
response_options:
|
||||
"1": "Child"
|
||||
"2": "Servant"
|
||||
"3": "Pet"
|
||||
"4": "Student"
|
||||
"5": "Partner"
|
||||
"6": "Teacher"
|
||||
"7": "Boss"
|
||||
"8": "Friend"
|
||||
"9": "Colleague"
|
||||
"10": "Master"
|
||||
"11": "Other"
|
||||
output: "perceived_role_ai_should_take_no_user"
|
||||
reference: "self"
|
||||
45
config/questionnaires/perceived_role_of_ai.yaml
Normal file
@@ -0,0 +1,45 @@
questionnaire: "perceived_role_of_ai"
|
||||
scales:
|
||||
- name: "perceived_role_of_ai_favorite_ai"
|
||||
label: "Perceived Role of Favorite AI in Interaction"
|
||||
items:
|
||||
- id: "role_partner_User_fav"
|
||||
text: "What role does (piped fav AI) take when you interact with it? Please select the option that fits most in most cases."
|
||||
open_ended_id: "role_partner_User_fav_11_TEXT"
|
||||
calculation: "categorical"
|
||||
response_options:
|
||||
"1": "Child"
|
||||
"2": "Servant"
|
||||
"3": "Pet"
|
||||
"4": "Student"
|
||||
"5": "Partner"
|
||||
"6": "Teacher"
|
||||
"7": "Boss"
|
||||
"8": "Friend"
|
||||
"9": "Colleague"
|
||||
"10": "Master"
|
||||
"11": "Other"
|
||||
output: "perceived_role_of_ai_favorite_ai"
|
||||
reference: "Sarigul, B., Schneider, F. M., & Utz, S. (2025). Believe it or not? https://doi.org/10.1080/10447318.2024.2375797"
|
||||
|
||||
- name: "perceived_role_of_ai_no_user"
|
||||
label: "Perceived Role of AI Systems in Interaction"
|
||||
items:
|
||||
- id: "role_partner_noUser"
|
||||
text: "In your understanding: What role do AI systems generally take when one interacts with them? Please select the option that fits most in most cases."
|
||||
open_ended_id: "role_partner_noUser_11_TEXT"
|
||||
calculation: "categorical"
|
||||
response_options:
|
||||
"1": "Child"
|
||||
"2": "Servant"
|
||||
"3": "Pet"
|
||||
"4": "Student"
|
||||
"5": "Partner"
|
||||
"6": "Teacher"
|
||||
"7": "Boss"
|
||||
"8": "Friend"
|
||||
"9": "Colleague"
|
||||
"10": "Master"
|
||||
"11": "Other"
|
||||
output: "perceived_role_of_ai_no_user"
|
||||
reference: "Sarigul, B., Schneider, F. M., & Utz, S. (2025). Believe it or not? https://doi.org/10.1080/10447318.2024.2375797"
|
||||
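The role scales above pair `calculation: "categorical"` with an `open_ended_id` for the "Other" option, which presumably means: map the numeric code to its label, and when the participant chose "Other", substitute their free-text answer from the `_11_TEXT` column. A minimal sketch under that assumption (function name and arguments are illustrative, not the package's actual API):

```python
def recode_categorical(raw_code, response_options, open_ended_code=None, open_ended_text=None):
    """Map a categorical code to its label.

    If the code matches `open_ended_code` (e.g. "11" = "Other") and a free-text
    answer exists, return the free text instead of the generic label.
    """
    label = response_options[str(raw_code)]
    if open_ended_code is not None and str(raw_code) == open_ended_code and open_ended_text:
        return open_ended_text.strip()
    return label

options = {"5": "Partner", "11": "Other"}
role = recode_categorical(5, options)                                  # regular option
other = recode_categorical(11, options, "11", "a research assistant")  # free text wins
```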
@@ -0,0 +1,297 @@
questionnaire: "perceived_task_characteristics_delegation"
|
||||
scales:
|
||||
- name: "perceived_task_characteristics_email"
|
||||
label: "Perceived Task Characteristics and Trust in AI for Writing a Work-Related Email"
|
||||
items:
|
||||
- id: "del_task_diff_1_1"
|
||||
text: "This task requires social skills to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_2"
|
||||
text: "This task requires creativity to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_3"
|
||||
text: "This task requires a great deal of time or effort to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_4"
|
||||
text: "It takes significant training or expertise to be qualified for this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_5"
|
||||
text: "I am confident in my own abilities to complete this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_6"
|
||||
text: "I trust an AI system’s ability to reliably complete the task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_7"
|
||||
text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
|
||||
inverse: false
|
||||
- id: "del_task_diff_1_8"
|
||||
text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_email"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
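The `mapped_mean` calculation above first recodes the raw 1/2/3 agreement codes through the `response_options` mapping (Disagree → −1, Agree → +1, Neutral → 0) and then averages the recoded values. A minimal sketch of that two-step aggregation (the function name is illustrative, not the package's actual API):

```python
def mapped_mean(raw_codes, mapping):
    """Recode raw agreement codes via the `response_options` mapping, then average."""
    mapped = [mapping[code] for code in raw_codes]
    return sum(mapped) / len(mapped)

# Agree, Agree, Neutral, Disagree -> (1 + 1 + 0 - 1) / 4 = 0.25
score = mapped_mean([2, 2, 3, 1], {1: -1, 2: 1, 3: 0})
```

Because agreement maps to +1 and disagreement to −1, the resulting score is a signed balance in [−1, 1] rather than a position on the raw 1–3 range.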
  - name: "perceived_task_characteristics_formal_letter"
    label: "Perceived Task Characteristics and Trust in AI for Writing a Formal Letter"
    items:
      - id: "del_task_diff_2_1"
        text: "This task requires social skills to complete."
        inverse: false
      - id: "del_task_diff_2_2"
        text: "This task requires creativity to complete."
        inverse: false
      - id: "del_task_diff_2_3"
        text: "This task requires a great deal of time or effort to complete."
        inverse: false
      - id: "del_task_diff_2_4"
        text: "It takes significant training or expertise to be qualified for this task."
        inverse: false
      - id: "del_task_diff_2_5"
        text: "I am confident in my own abilities to complete this task."
        inverse: false
      - id: "del_task_diff_2_6"
        text: "I trust an AI system’s ability to reliably complete the task."
        inverse: false
      - id: "del_task_diff_2_7"
        text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
        inverse: false
      - id: "del_task_diff_2_8"
        text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
        inverse: false
    score_range: [1, 3]
    format: "Agreement Scale"
    calculation: "mapped_mean"
    response_options:
      1: -1  # Disagree
      2: 1   # Agree
      3: 0   # Neutral
    output: "perceived_task_characteristics_formal_letter"
    reference: "Lubars, B., & Tan, C. (2019)"

  - name: "perceived_task_characteristics_job_application"
    label: "Perceived Task Characteristics and Trust in AI for Writing a Job Application"
    items:
      - id: "del_task_diff_3_1"
        text: "This task requires social skills to complete."
        inverse: false
      - id: "del_task_diff_3_2"
        text: "This task requires creativity to complete."
        inverse: false
      - id: "del_task_diff_3_3"
        text: "This task requires a great deal of time or effort to complete."
        inverse: false
      - id: "del_task_diff_3_4"
        text: "It takes significant training or expertise to be qualified for this task."
        inverse: false
      - id: "del_task_diff_3_5"
        text: "I am confident in my own abilities to complete this task."
        inverse: false
      - id: "del_task_diff_3_6"
        text: "I trust an AI system’s ability to reliably complete the task."
        inverse: false
      - id: "del_task_diff_3_7"
        text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
        inverse: false
      - id: "del_task_diff_3_8"
        text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
        inverse: false
    score_range: [1, 3]
    format: "Agreement Scale"
    calculation: "mapped_mean"
    response_options:
      1: -1  # Disagree
      2: 1   # Agree
      3: 0   # Neutral
    output: "perceived_task_characteristics_job_application"
    reference: "Lubars, B., & Tan, C. (2019)"

  - name: "perceived_task_characteristics_meeting_summary"
    label: "Perceived Task Characteristics and Trust in AI for Writing a Brief Business Meeting Summary"
    items:
      - id: "del_task_diff_4_1"
        text: "This task requires social skills to complete."
        inverse: false
      - id: "del_task_diff_4_2"
        text: "This task requires creativity to complete."
        inverse: false
      - id: "del_task_diff_4_3"
        text: "This task requires a great deal of time or effort to complete."
        inverse: false
      - id: "del_task_diff_4_4"
        text: "It takes significant training or expertise to be qualified for this task."
        inverse: false
      - id: "del_task_diff_4_5"
        text: "I am confident in my own abilities to complete this task."
        inverse: false
      - id: "del_task_diff_4_6"
        text: "I trust an AI system’s ability to reliably complete the task."
        inverse: false
      - id: "del_task_diff_4_7"
        text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
        inverse: false
      - id: "del_task_diff_4_8"
        text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_meeting_summary"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
|
||||
- name: "perceived_task_characteristics_condolence_card"
|
||||
label: "Perceived Task Characteristics and Trust in AI for Writing a Condolence Card"
|
||||
items:
|
||||
- id: "del_task_diff_5_1"
|
||||
text: "This task requires social skills to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_2"
|
||||
text: "This task requires creativity to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_3"
|
||||
text: "This task requires a great deal of time or effort to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_4"
|
||||
text: "It takes significant training or expertise to be qualified for this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_5"
|
||||
text: "I am confident in my own abilities to complete this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_6"
|
||||
text: "I trust an AI system’s ability to reliably complete the task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_7"
|
||||
text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
|
||||
inverse: false
|
||||
- id: "del_task_diff_5_8"
|
||||
text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_condolence_card"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
|
||||
- name: "perceived_task_characteristics_social_media_post"
|
||||
label: "Perceived Task Characteristics and Trust in AI for Writing a Social Media Post"
|
||||
items:
|
||||
- id: "del_task_diff_6_1"
|
||||
text: "This task requires social skills to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_2"
|
||||
text: "This task requires creativity to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_3"
|
||||
text: "This task requires a great deal of time or effort to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_4"
|
||||
text: "It takes significant training or expertise to be qualified for this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_5"
|
||||
text: "I am confident in my own abilities to complete this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_6"
|
||||
text: "I trust an AI system’s ability to reliably complete the task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_7"
|
||||
text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
|
||||
inverse: false
|
||||
- id: "del_task_diff_6_8"
|
||||
text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_social_media_post"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
|
||||
- name: "perceived_task_characteristics_personal_letter"
|
||||
label: "Perceived Task Characteristics and Trust in AI for Writing a Personal Letter"
|
||||
items:
|
||||
- id: "del_task_diff_7_1"
|
||||
text: "This task requires social skills to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_2"
|
||||
text: "This task requires creativity to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_3"
|
||||
text: "This task requires a great deal of time or effort to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_4"
|
||||
text: "It takes significant training or expertise to be qualified for this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_5"
|
||||
text: "I am confident in my own abilities to complete this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_6"
|
||||
text: "I trust an AI system’s ability to reliably complete the task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_7"
|
||||
text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
|
||||
inverse: false
|
||||
- id: "del_task_diff_7_8"
|
||||
text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_personal_letter"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
|
||||
- name: "perceived_task_characteristics_birthday_invitation"
|
||||
label: "Perceived Task Characteristics and Trust in AI for Writing a Birthday Invitation Card"
|
||||
items:
|
||||
- id: "del_task_diff_8_1"
|
||||
text: "This task requires social skills to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_2"
|
||||
text: "This task requires creativity to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_3"
|
||||
text: "This task requires a great deal of time or effort to complete."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_4"
|
||||
text: "It takes significant training or expertise to be qualified for this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_5"
|
||||
text: "I am confident in my own abilities to complete this task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_6"
|
||||
text: "I trust an AI system’s ability to reliably complete the task."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_7"
|
||||
text: "Understanding the reasons behind the AI agent's actions is important for me to trust the AI agent on this task (e.g., explanations are necessary)."
|
||||
inverse: false
|
||||
- id: "del_task_diff_8_8"
|
||||
text: "I trust an AI system's decisions to protect my interests and align with my values for this task."
|
||||
inverse: false
|
||||
score_range: [1, 3]
|
||||
format: "Agreement Scale"
|
||||
calculation: "mapped_mean"
|
||||
response_options:
|
||||
1: -1 # Disagree
|
||||
2: 1 # Agree
|
||||
3: 0 # Neutral
|
||||
output: "perceived_task_characteristics_birthday_invitation"
|
||||
reference: "Lubars, B., & Tan, C. (2019)"
|
||||
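The `mapped_mean` calculation named above is only referenced, not defined, in this commit. A minimal sketch of how such a scorer could work, assuming (the function name and signature are illustrative, not part of the repository) that raw response codes are first recoded via the scale's `response_options` mapping and then averaged:

```python
from statistics import mean

def mapped_mean(responses, response_options):
    """Recode raw response codes through the scale's response_options
    mapping (e.g. 1 -> -1, 2 -> 1, 3 -> 0), then average the result."""
    return mean(response_options[r] for r in responses)

# Example with the Agreement Scale mapping used above
options = {1: -1, 2: 1, 3: 0}
mapped_mean([1, 2, 2, 3], options)  # (-1 + 1 + 1 + 0) / 4 = 0.25
```

Recoding before averaging keeps "Neutral" (stored as code 3) from inflating the mean the way a raw-code average would.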
23
config/questionnaires/perception_of_being_left_behind.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "perception_of_being_left_behind"
scales:
  - name: "perception_of_being_left_behind"
    label: "Perceived Ability to Keep Up with AI Advancements"
    items:
      - id: "leftbehind_1"
        text: "I feel left behind by how quickly AI is advancing."
        inverse: false
      - id: "leftbehind_2"
        text: "I worry I won’t keep up with new AI tools."
        inverse: false
      - id: "leftbehind_3"
        text: "I feel confident I can adapt to AI changes."
        inverse: true
      - id: "leftbehind_4"
        text: "I think AI is moving faster than I can learn."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "perception_of_being_left_behind"
    reference: "self"
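Scales with `calculation: "mean"` also carry `inverse` flags per item (e.g. `leftbehind_3` above). A minimal sketch of how reverse-coding could be combined with the mean, assuming the usual `min + max - r` reversal on the scale's `score_range` (function name and signature are illustrative, not taken from the repository):

```python
from statistics import mean

def scale_mean(responses, inverse_flags, score_range=(1, 5)):
    """Compute a scale mean, reverse-coding items marked inverse: true.
    A reversed response r becomes (min + max - r) on the score_range."""
    lo, hi = score_range
    recoded = [
        (lo + hi - r) if inverse else r
        for r, inverse in zip(responses, inverse_flags)
    ]
    return mean(recoded)

# leftbehind_3 ("I feel confident I can adapt...") is inverse: true,
# so a response of 2 is recoded to 1 + 5 - 2 = 4 before averaging.
scale_mean([4, 5, 2, 4], [False, False, True, False])  # mean of [4, 5, 4, 4] = 4.25
```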
29
config/questionnaires/perception_tool_actor.yaml
Normal file
@@ -0,0 +1,29 @@
questionnaire: "perception_tool_actor"
scales:
  - name: "perception_tool_actor_favorite_ai"
    label: "Perception of favorite AI system as tool or social actor"
    items:
      - id: "partner_tool_User_fav"
        text: "How do you usually perceive [favorite AI] while using it?"
    score_range: [1, 2]
    format: "categorical"
    calculation: "categorical"
    response_options:
      "1": "More like a tool"
      "2": "More like a social actor"
    output: "perception_tool_actor_favorite_ai"
    reference: "self"

  - name: "perception_tool_actor_no_user"
    label: "Perception of general AI system as tool or social actor"
    items:
      - id: "partner_tool_noUser"
        text: "How do you usually perceive AI systems in general?"
    score_range: [1, 2]
    format: "categorical"
    calculation: "categorical"
    response_options:
      "1": "More like a tool"
      "2": "More like a social actor"
    output: "perception_tool_actor_no_user"
    reference: "self"
35
config/questionnaires/personality_specific_traits.yaml
Normal file
@@ -0,0 +1,35 @@
questionnaire: "personality_specific_traits"
scales:
  - name: "personality_specific_traits"
    label: "Personality-Specific Traits"
    items:
      - id: "pers_specific_1"
        text: "I don't like being alone."
        inverse: false
      - id: "pers_specific_2"
        text: "I have a strong 'need to belong.'"
        inverse: false
      - id: "pers_specific_3"
        text: "I would prefer complex to simple problems."
        inverse: false
      - id: "pers_specific_4"
        text: "Thinking is not my idea of fun."
        inverse: true
      - id: "pers_specific_5"
        text: "I would rather do something that requires little thought than something that is sure to challenge my thinking abilities."
        inverse: true
      - id: "pers_specific_6"
        text: "I don’t like situations that are uncertain."
        inverse: false
      - id: "pers_specific_7"
        text: "I dislike questions which could be answered in many different ways."
        inverse: false
      - id: "pers_specific_8"
        text: "I feel uncomfortable when I don’t understand the reason why an event occurred in my life."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "personality_specific_traits"
    reference: "Leary, M. R., et al. (2013); https://doi.org/10.1080/00223891.2013.819511"
23
config/questionnaires/potential_motivators_for_ai_usage.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "potential_motivators_for_ai_usage"
scales:
  - name: "potential_motivators_for_ai_usage"
    label: "Motivators that could increase AI Usage"
    items:
      - id: "motivator_1"
        text: "I would use AI if it were easier to understand."
        inverse: false
      - id: "motivator_2"
        text: "I would use AI if it felt more trustworthy."
        inverse: false
      - id: "motivator_3"
        text: "I would use AI if it saved me time."
        inverse: false
      - id: "motivator_4"
        text: "I would use AI if it fit my daily routines."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "potential_motivators_for_ai_usage"
    reference: "self"
23
config/questionnaires/preference_for_status_quo.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "preference_for_status_quo"
scales:
  - name: "preference_for_status_quo_no_user"
    label: "Preference for Status Quo Among Non-Users of AI"
    items:
      - id: "pref_statu_Quo_noUser_1"
        text: "I feel confident in my current digital skills, so I do not need additional AI assistance."
        inverse: false
      - id: "pref_statu_Quo_noUser_2"
        text: "I am comfortable using the digital tools I have mastered without incorporating AI."
        inverse: false
      - id: "pref_statu_Quo_noUser_3"
        text: "Learning new AI technologies seems unnecessary given my proficiency with existing methods."
        inverse: false
      - id: "pref_statu_Quo_noUser_4"
        text: "I prefer to rely on familiar digital practices rather than exploring novel AI innovations."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "preference_for_status_quo_no_user"
    reference: "self"
35
config/questionnaires/preferred_level_of_delegation.yaml
Normal file
@@ -0,0 +1,35 @@
questionnaire: "preferred_level_of_delegation"
scales:
  - name: "preferred_level_of_delegation"
    label: "Preferred Level of AI Assistance for Writing Tasks"
    items:
      - id: "delg_1"
        text: "Writing a work-related email."
        inverse: false
      - id: "delg_2"
        text: "Writing a formal letter, e.g., to a public authority."
        inverse: false
      - id: "delg_3"
        text: "Writing a job application."
        inverse: false
      - id: "delg_4"
        text: "Writing a brief business meeting summary."
        inverse: false
      - id: "delg_5"
        text: "Writing a condolence card to a friend."
        inverse: false
      - id: "delg_6"
        text: "Writing a social media post."
        inverse: false
      - id: "delg_7"
        text: "Writing a personal letter to a friend or relative."
        inverse: false
      - id: "delg_8"
        text: "Writing a birthday invitation card to friends."
        inverse: false
    score_range: [1, 4]
    format: "Multiple Choice (Preference Scale)"
    calculation: "mean"
    response_options: "1 = Full AI automation, 2 = AI leads and the human assists, 3 = Human leads and AI assists, 4 = No AI assistance"
    output: "preferred_level_of_delegation"
    reference: "self"
100
config/questionnaires/reason_for_not_using_ai.yaml
Normal file
@@ -0,0 +1,100 @@
questionnaire: "reason_for_not_using_ai"
scales:
  - name: "reason_for_not_using_ai"
    label: "Reasons for Not Currently Using AI Systems"
    items:
      - id: "reason_why_noUser_1_1"
        label: "lack_knowledge"
        text: "I lack knowledge about how to use AI systems effectively."
      - id: "reason_why_noUser_1_2"
        label: "privacy_concerns"
        text: "I am concerned about the privacy and security of using AI."
      - id: "reason_why_noUser_1_3"
        label: "no_use_case"
        text: "I haven't found a clear need or use case for AI in my daily tasks."
      - id: "reason_why_noUser_1_4"
        label: "cost_barrier"
        text: "The cost of AI systems is a barrier for me."
      - id: "reason_why_noUser_1_5"
        label: "reliability_concerns"
        text: "I am unsure about the reliability or accuracy of AI systems."
      - id: "reason_why_noUser_1_6"
        label: "prefer_traditional"
        text: "I prefer traditional methods and am hesitant to adopt new technology."
      - id: "reason_why_noUser_1_7"
        label: "waiting_maturity"
        text: "I am waiting for AI technology to become more mature and widely adopted."
      - id: "reason_why_noUser_1_8"
        label: "no_resources"
        text: "I don't have access to the necessary resources to use AI."
      - id: "reason_why_noUser_1_9"
        label: "ethical_concerns"
        text: "I am concerned about the ethical implications of using AI."
      - id: "reason_why_noUser_1_10"
        label: "no_time"
        text: "I haven't had the time to learn about or explore AI systems."
      - id: "reason_why_noUser_1_11"
        label: "dislike_ai"
        text: "I don’t like AI."
      - id: "reason_why_noUser_1_12"
        label: "other"
        text: "Other reason (please name)."
        open_ended_id: "reason_why_noUser_1_12_TEXT"
    score_range: [1, 2]
    format: "Multiple Selection"
    calculation: "multiple_selection"
    response_options:
      "1": "True"
      "2": "False"
    output: "reason_for_not_using_ai"
    reference: "self"

  - name: "strong_reason_for_not_using_ai"
    label: "Strength of Each Reason for Not Currently Using AI Systems (is this a strong reason for you?)"
    items:
      - id: "reason_why_noUser_2_1"
        label: "strong_reason_lack_knowledge"
        text: "Lack of knowledge about how to use AI systems effectively."
      - id: "reason_why_noUser_2_2"
        label: "strong_reason_privacy_concerns"
        text: "Concerns about the privacy and security of using AI."
      - id: "reason_why_noUser_2_3"
        label: "strong_reason_no_use_case"
        text: "Not having found a clear need or use case for AI in daily tasks."
      - id: "reason_why_noUser_2_4"
        label: "strong_reason_cost_barrier"
        text: "The cost of AI systems being a barrier."
      - id: "reason_why_noUser_2_5"
        label: "strong_reason_reliability_concerns"
        text: "Uncertainty about the reliability or accuracy of AI systems."
      - id: "reason_why_noUser_2_6"
        label: "strong_reason_prefer_traditional"
        text: "Preference for traditional methods and hesitation to adopt new technology."
      - id: "reason_why_noUser_2_7"
        label: "strong_reason_waiting_maturity"
        text: "Waiting for AI technology to become more mature and widely adopted."
      - id: "reason_why_noUser_2_8"
        label: "strong_reason_no_resources"
        text: "Lack of access to the necessary resources to use AI."
      - id: "reason_why_noUser_2_9"
        label: "strong_reason_ethical_concerns"
        text: "Concerns about the ethical implications of using AI."
      - id: "reason_why_noUser_2_10"
        label: "strong_reason_no_time"
        text: "Not having had the time to learn about or explore AI systems."
      - id: "reason_why_noUser_2_11"
        label: "strong_reason_dislike_ai"
        text: "Disliking AI."
      - id: "reason_why_noUser_2_12"
        label: "strong_reason_other"
        text: "Other reason (please name)."
        open_ended_id: "reason_why_noUser_2_12_TEXT"
    score_range: [1, 2]
    format: "Multiple Selection"
    calculation: "multiple_selection"
    response_options:
      "1": "True"
      "2": "False"
    output: "strong_reason_for_not_using_ai"
    reference: "self"
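The `multiple_selection` calculation above is likewise only named in this commit. A minimal sketch of what such a scorer might produce, assuming (function name and signature are illustrative, not part of the repository) each item is turned into a boolean keyed by its id, using the scale's "selected" response code:

```python
def multiple_selection(responses, selected_code="1"):
    """Map each item id to True/False for a Multiple Selection scale,
    treating the scale's 'selected' response code (here "1") as True."""
    return {item: str(code) == selected_code for item, code in responses.items()}

# With this file's response_options ("1" = True, "2" = False):
answers = {"reason_why_noUser_1_1": "1", "reason_why_noUser_1_2": "2"}
multiple_selection(answers)
# {'reason_why_noUser_1_1': True, 'reason_why_noUser_1_2': False}
```

For the `task_types` scales later in this commit, whose options are "1" = Selected and "0" = Not Selected, the same sketch applies with `selected_code="1"`.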
36
config/questionnaires/risk_opportunity_perception.yaml
Normal file
@@ -0,0 +1,36 @@
questionnaire: "risk_opportunity_perception"

scales:
  - name: "risk_opportunity_perception"
    label: "Perceived Risks and Opportunities of AI"
    items:
      - id: "rop_1"
        text: "How concerned, if at all, are you when you think about AI?"
        inverse: false
      - id: "rop_2"
        text: "How pessimistic, if at all, are you when you imagine the future use of AI?"
        inverse: false
      - id: "rop_3"
        text: "How likely would AI negatively influence your life?"
        inverse: false
      - id: "rop_4"
        text: "How likely would you suffer from the implementation of AI into everyday life?"
        inverse: false
      - id: "rop_5"
        text: "How confident, if at all, do you feel when you think about the potential of AI?"
        inverse: false
      - id: "rop_6"
        text: "How optimistic, if at all, are you when you imagine the future use of AI?"
        inverse: false
      - id: "rop_7"
        text: "How likely would AI positively influence your life?"
        inverse: false
      - id: "rop_8"
        text: "How likely would you benefit from the implementation of AI into everyday life?"
        inverse: false
    score_range: [1, 5]
    format: "bipolar"
    calculation: "mean"
    response_options: "1 = not at all, 5 = extremely"
    output: "risk_opportunity_score"
    reference: "Adapted from Walpole & Wilson, 2021; Schwesig et al., 2023"
23
config/questionnaires/security_concerns.yaml
Normal file
@@ -0,0 +1,23 @@
questionnaire: "security_concerns"
scales:
  - name: "security_concerns_no_user"
    label: "Security and Privacy Concerns About AI Among Non-Users"
    items:
      - id: "sec_concern_noUser_1"
        text: "I worry that AI systems leave my personal information vulnerable."
        inverse: false
      - id: "sec_concern_noUser_2"
        text: "I worry that using AI in daily applications makes it easier for my data to be misused."
        inverse: false
      - id: "sec_concern_noUser_3"
        text: "I feel that AI lacks transparency about how it collects and processes my information."
        inverse: false
      - id: "sec_concern_noUser_4"
        text: "I believe that companies using AI to analyze data put my privacy at risk."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Somewhat disagree, 4 = Neither disagree nor agree, 5 = Somewhat agree, 6 = Agree, 7 = Strongly agree"
    output: "security_concerns_no_user"
    reference: "self"
89
config/questionnaires/self_efficacy.yaml
Normal file
@@ -0,0 +1,89 @@
questionnaire: "self_efficacy"
scales:
  - name: "self_efficacy_without_ai_creativity"
    label: "Creativity Self-Efficacy Without Considering AI Assistance"
    items:
      - id: "self_effcy_1"
        text: "Regarding my creativity: I have confidence in my ability to generate novel and useful ideas."
        inverse: false
      - id: "self_effcy_2"
        text: "Regarding my creativity: I am confident that my creative performance in a given task is good."
        inverse: false
      - id: "self_effcy_3"
        text: "Regarding my creativity: I have a good knowledge base and creative strategy."
        inverse: false
      - id: "self_effcy_4"
        text: "Regarding my creativity: I know that my creative abilities are good."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "self_efficacy_without_ai_creativity"
    reference: "Puente-Díaz, R., & Cavazos-Arroyo, J. (2022)"

  - name: "self_efficacy_without_ai_problem_solving"
    label: "Problem-Solving Self-Efficacy Without Considering AI Assistance"
    items:
      - id: "self_effcy_5"
        text: "Regarding my problem-solving skills: I feel capable of coping with my problems."
        inverse: false
      - id: "self_effcy_6"
        text: "Regarding my problem-solving skills: Typically, my problems with any given task are not too big or hard for me to solve."
        inverse: false
      - id: "self_effcy_7"
        text: "Regarding my problem-solving skills: Most of my plans for solving my problems on a given task are ones that would really work."
        inverse: false
      - id: "self_effcy_8"
        text: "Regarding my problem-solving skills: I can typically handle my problems in any given task; they are not beyond me."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "self_efficacy_without_ai_problem_solving"
    reference: "Heppner et al., 2001"

  - name: "self_efficacy_with_ai_creativity"
    label: "Creativity self-efficacy with AI"
    items:
      - id: "self_effcy_wAI_1"
        text: "Regarding my creativity: I have confidence in my ability to generate novel and useful ideas."
        inverse: false
      - id: "self_effcy_wAI_2"
        text: "Regarding my creativity: I am confident that my creative performance in a given task is good."
        inverse: false
      - id: "self_effcy_wAI_3"
        text: "Regarding my creativity: I have a good knowledge base and creative strategy."
        inverse: false
      - id: "self_effcy_wAI_4"
        text: "Regarding my creativity: I know that my creative abilities are good."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "self_efficacy_with_ai_creativity"
    reference: "Puente-Díaz, R., & Cavazos-Arroyo, J. (2022)"

  - name: "self_efficacy_with_ai_problem_solving"
    label: "Problem solving self-efficacy with AI"
    items:
      - id: "self_effcy_wAI_5"
        text: "Regarding my problem-solving skills: I feel capable of coping with my problems."
        inverse: false
      - id: "self_effcy_wAI_6"
        text: "Regarding my problem-solving skills: Typically, my problems with any given task are not too big or hard for me to solve."
        inverse: false
      - id: "self_effcy_wAI_7"
        text: "Regarding my problem-solving skills: Most of my plans for solving my problems on a given task are ones that would really work."
        inverse: false
      - id: "self_effcy_wAI_8"
        text: "Regarding my problem-solving skills: I can typically handle my problems in any given task; they are not beyond me."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 2 = Disagree, 3 = Neutral, 4 = Agree, 5 = Strongly agree"
    output: "self_efficacy_with_ai_problem_solving"
    reference: "Heppner et al., 2001"
46
config/questionnaires/social_presence.yaml
Normal file
@@ -0,0 +1,46 @@
questionnaire: "social_presence"
scales:
  - name: "social_presence_sense_favorite_ai"
    label: "Social presence of favorite AI system (humanness)"
    items:
      - id: "social_pres_User_fav_1"
        text: "There is a sense of human contact in the interaction."
        inverse: false
      - id: "social_pres_User_fav_2"
        text: "There is a sense of personalness in the interaction."
        inverse: false
      - id: "social_pres_User_fav_3"
        text: "There is a feeling of sociability in the interaction."
        inverse: false
      - id: "social_pres_User_fav_4"
        text: "There is a feeling of human warmth in the interaction."
        inverse: false
      - id: "social_pres_User_fav_5"
        text: "There is a feeling of human sensitivity in the interaction."
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 7 = Strongly agree"
    output: "social_presence_sense_favorite_ai"
    reference: "Gefen & Straub (2004)"
    retain_single_items: true

  - name: "social_presence_being_favorite_ai"
    label: "Social presence of favorite AI system (intelligence)"
    items:
      - id: "social_pres2_fav_1"
        text: "...how much do you feel you are interacting with an intelligent being?"
        inverse: false
      - id: "social_pres2_fav_2"
        text: "...how much do you feel you are in the company of an intelligent being?"
        inverse: false
      - id: "social_pres2_fav_3"
        text: "...how much do you feel an intelligent being is responding to you?"
        inverse: false
    score_range: [1, 7]
    format: "likert"
    calculation: "mean"
    response_options: "1 = Strongly disagree, 7 = Strongly agree"
    output: "social_presence_being_favorite_ai"
    reference: "Lee, K. M., et al. (2006)"
81
config/questionnaires/task_types.yaml
Normal file
@@ -0,0 +1,81 @@
questionnaire: "task_types"
scales:
  - name: "task_types_general"
    label: "General Purposes for Using AI Systems"
    items:
      - id: "tsk_typs_1"
        label: "writing"
        text: "Writing"
      - id: "tsk_typs_2"
        label: "content_creation"
        text: "Content Creation (excluding writing)"
      - id: "tsk_typs_3"
        label: "creative_idea"
        text: "Creative Idea Exploration"
      - id: "tsk_typs_4"
        label: "info_search"
        text: "Information Search"
      - id: "tsk_typs_5"
        label: "advice"
        text: "Advice and Recommendation"
      - id: "tsk_typs_6"
        label: "learning"
        text: "Explanations and Learning"
      - id: "tsk_typs_7"
        label: "analysis"
        text: "Analysis and Processing"
      - id: "tsk_typs_8"
        label: "automation"
        text: "Automation and Productivity"
      - id: "tsk_typs_9"
        label: "other"
        text: "Other"
        open_ended_id: "tsk_typs_9_TEXT"
    score_range: [0, 1]
    format: "Multiple Selection"
    calculation: "multiple_selection"
    response_options:
      "1": "Selected"
      "0": "Not Selected"
    output: "task_types_general"
    reference: "self"

  - name: "task_types_favorite_ai"
    label: "Types of Tasks for Which Favorite AI Is Used"
    items:
      - id: "tsk_typs_User_fav_1"
        label: "writing"
        text: "Writing"
      - id: "tsk_typs_User_fav_2"
        label: "content_creation"
        text: "Content Creation (excluding writing)"
      - id: "tsk_typs_User_fav_3"
        label: "creative_idea"
        text: "Creative Idea Exploration"
      - id: "tsk_typs_User_fav_4"
        label: "info_search"
        text: "Information Search"
      - id: "tsk_typs_User_fav_5"
        label: "advice"
        text: "Advice and Recommendation"
      - id: "tsk_typs_User_fav_6"
        label: "learning"
        text: "Explanations and Learning"
      - id: "tsk_typs_User_fav_7"
        label: "analysis"
        text: "Analysis and Processing"
      - id: "tsk_typs_User_fav_8"
        label: "automation"
        text: "Automation and Productivity"
      - id: "tsk_typs_User_fav_9"
        label: "other"
        text: "Other"
        open_ended_id: "tsk_typs_User_fav_9_TEXT"
    score_range: [0, 1]
    format: "Multiple Selection"
    calculation: "multiple_selection"
    response_options:
      "1": "Selected"
      "0": "Not Selected"
    output: "task_types_favorite_ai"
    reference: "self"
36  config/questionnaires/trust.yaml  Normal file
@@ -0,0 +1,36 @@
questionnaire: "trust"
scales:
  - name: "trust_favorite_ai"
    label: "Trust in favorite AI system"
    items:
      - id: "trust_User_fav_1"
        text: "I am confident in [favorite AI]. I feel that it works well."
        inverse: false
      - id: "trust_User_fav_2"
        text: "The outputs of [favorite AI] are very predictable."
        inverse: false
      - id: "trust_User_fav_3"
        text: "[favorite AI] is very reliable. I can count on it to be correct all the time."
        inverse: false
      - id: "trust_User_fav_4"
        text: "I feel safe that when I rely on [favorite AI] I will get the right answers."
        inverse: false
      - id: "trust_User_fav_5"
        text: "[favorite AI] is efficient in that it works very quickly."
        inverse: false
      - id: "trust_User_fav_6"
        text: "[favorite AI] can perform the task better than a novice human user."
        inverse: false
      - id: "trust_User_fav_7"
        text: "I like using [favorite AI] for decision making."
        inverse: false
      - id: "trust_User_fav_8"
        text: "I trust [favorite AI]"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "trust_favorite_ai"
    reference: "Perrig, S. A., Scharowski, N., & Brühlmann, F. (2023)"
    retain_single_items: true
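A scale like the one above pairs `calculation: "mean"` with per-item `inverse` flags and a `score_range`. A minimal, illustrative sketch of how such a definition might be scored (function and argument names are hypothetical, not part of this repository):

```python
def score_scale(responses, score_range, inverse_flags):
    """Mean-score a Likert scale defined by a config entry.

    responses: dict of item_id -> numeric response
    score_range: the [lo, hi] pair from the config
    inverse_flags: dict of item_id -> bool (the per-item `inverse` field)
    """
    lo, hi = score_range
    values = []
    for item_id, value in responses.items():
        if inverse_flags.get(item_id, False):
            # reverse-code against the scale endpoints
            value = lo + hi - value
        values.append(value)
    return sum(values) / len(values)

# Two items of the 5-point trust scale, no inverse items:
print(score_scale({"trust_User_fav_1": 4, "trust_User_fav_8": 5},
                  [1, 5], {}))  # → 4.5
```

Reversing as `lo + hi - value` keeps the reversed score inside the same `score_range`, which is why the range is stored alongside the items.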
39  config/questionnaires/two_part_trust.yaml  Normal file
@@ -0,0 +1,39 @@
questionnaire: "trust"
scales:
  - name: "trust_competence"
    label: "Trust in favorite AI system"
    items:
      - id: "trust_User_fav_1"
        text: "I am confident in [favorite AI]. I feel that it works well."
        inverse: false
      - id: "trust_User_fav_5"
        text: "[favorite AI] is efficient in that it works very quickly."
        inverse: false
      - id: "trust_User_fav_6"
        text: "[favorite AI] can perform the task better than a novice human user."
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "trust_competence"
    reference: "Perrig, S. A., Scharowski, N., & Brühlmann, F. (2023), split in 2 parts"

  - name: "trust_dependability"
    label: "Trust in favorite AI system"
    items:
      - id: "trust_User_fav_3"
        text: "[favorite AI] is very reliable. I can count on it to be correct all the time."
        inverse: false
      - id: "trust_User_fav_4"
        text: "I feel safe that when I rely on [favorite AI] I will get the right answers."
        inverse: false
      - id: "trust_User_fav_8"
        text: "I trust [favorite AI]"
        inverse: false
    score_range: [1, 5]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = strongly disagree, 5 = strongly agree"
    output: "trust_dependability"
    reference: "Perrig, S. A., Scharowski, N., & Brühlmann, F. (2023), split in 2 parts"
27  config/questionnaires/us_voting_and_mood.yaml  Normal file
@@ -0,0 +1,27 @@
questionnaire: "us_voting_and_mood"
scales:
  - name: "voting_decision"
    label: "US Presidential Vote (Voluntary)"
    items:
      - id: "US_voting"
        text: "Due to the current events we are interested in your vote, your answer is voluntary!"
    calculation: "categorical"
    response_options:
      "1": "I voted for Trump"
      "2": "I voted for Harris"
      "3": "I did not vote!"
    output: "voting_decision"
    reference: "self"

  - name: "voting_mood"
    label: "Self-Reported Mood during the US voting"
    items:
      - id: "mood_1"
        text: "Please indicate the mood you had while completing the questionnaire on a scale from 1 to 7. 1 = Very negative 7 = Very positive - My mood is ..."
        inverse: false
    score_range: [1, 7]
    format: "Likert"
    calculation: "response"
    response_options: "1 = Very negative, 7 = Very positive"
    output: "voting_mood"
    reference: "self"
27  config/questionnaires/usage_and_experience.yaml  Normal file
@@ -0,0 +1,27 @@
questionnaire: "usage_and_experience"
scales:
  - name: "user"
    label: "User of a language-based AI system"
    items:
      - id: "use"
        text: "Are you currently using language-based AI systems?"
    format: "Boolean"
    calculation: "boolean"
    response_options:
      "1": true
      "2": false
    output: "user"
    reference: "self"

  - name: "open_to_use"
    label: "Openness to Using AI Systems in the Future"
    items:
      - id: "open_to_use"
        text: "Are you open to use or plan to use language-based AI systems in the next couple of months?"
        inverse: false
    calculation: "boolean"
    response_options:
      "1": true
      "2": false
    output: "open_to_use"
    reference: "self"
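The `user` scale above is what the wave configs' `"users"`/`"nonusers"` subgroup labels ultimately depend on: a `boolean` calculation maps the raw codes via `response_options`. A hypothetical sketch of that mapping and the resulting partition (function names are illustrative only):

```python
def decode_boolean(raw, mapping):
    """Decode a raw survey code through a response_options map,
    e.g. {"1": True, "2": False} as in usage_and_experience.yaml."""
    return mapping[str(raw)]

def split_subgroups(use_answers):
    """Partition participants into users/nonusers from the `use` item.

    use_answers: dict of participant_id -> raw response code
    """
    mapping = {"1": True, "2": False}
    users = {pid for pid, raw in use_answers.items()
             if decode_boolean(raw, mapping)}
    nonusers = set(use_answers) - users
    return users, nonusers

users, nonusers = split_subgroups({"p1": 1, "p2": 2, "p3": "1"})
```

Coercing the raw code to `str` before lookup tolerates exports that deliver codes either as integers or as strings.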
27  config/questionnaires/usage_frequency.yaml  Normal file
@@ -0,0 +1,27 @@
questionnaire: "usage_frequency"
scales:
  - name: "general_ai_usage_frequency"
    label: "General usage frequency of AI systems"
    items:
      - id: "use_freq_User"
        text: "General usage frequency of AI systems"
        inverse: false
    score_range: [1, 7]
    format: "bipolar"
    calculation: "response"
    response_options: "1 = never, 7 = several times a day"
    output: "general_ai_usage_frequency"
    reference: "self"

  - name: "favorite_ai_usage_frequency"
    label: "Usage frequency of favorite AI tool"
    items:
      - id: "use_freq_User_fav"
        text: "Usage frequency of favorite AI tool"
        inverse: false
    score_range: [1, 7]
    format: "bipolar"
    calculation: "response"
    response_options: "1 = never, 7 = several times a day"
    output: "favorite_ai_usage_frequency"
    reference: "self"
27  config/questionnaires/usefulness.yaml  Normal file
@@ -0,0 +1,27 @@
questionnaire: "usefulness"
scales:
  - name: "usefulness_favorite_ai"
    label: "Perceived Usefulness of Favorite AI"
    items:
      - id: "usefulnss_User_fav_1"
        text: "How useful is (piped fav AI) for assisting you with this/these task(s)?"
        inverse: false
    score_range: [0, 100]
    format: "slider"
    calculation: "response"
    response_options: "0 = Not useful at all, 100 = Extremely useful"
    output: "usefulness_favorite_ai"
    reference: "self"

  - name: "usefulness_no_user"
    label: "Perceived Usefulness of AI Systems"
    items:
      - id: "usefulnss_noUser_1"
        text: "How useful do you perceive AI systems?"
        inverse: false
    score_range: [0, 100]
    format: "slider"
    calculation: "response"
    response_options: "0 = Not useful at all, 100 = Extremely useful"
    output: "usefulness_no_user"
    reference: "self"
54  config/questionnaires/willingness_to_delegate.yaml  Normal file
@@ -0,0 +1,54 @@
questionnaire: "willingness_to_delegate"
scales:
  - name: "willingness_to_delegate"
    label: "Willingness to Delegate Tasks to AI on Your Behalf"
    items:
      - id: "delg_behalf_1"
        text: "Post public updates on social media for you."
        inverse: false
      - id: "delg_behalf_2"
        text: "Reply to generic messages (e.g., appointment confirmations)."
        inverse: false
      - id: "delg_behalf_3"
        text: "Reply to private messages from friends or family."
        inverse: false
      - id: "delg_behalf_4"
        text: "Choose topics or articles you share on social media."
        inverse: false
      - id: "delg_behalf_5"
        text: "Recommend potential dating partners."
        inverse: false
      - id: "delg_behalf_6"
        text: "Start conversations with potential dating partners."
        inverse: false
      - id: "delg_behalf_7"
        text: "Make final choices about whom you date."
        inverse: false
      - id: "delg_behalf_8"
        text: "Schedule medical appointments for you."
        inverse: false
      - id: "delg_behalf_9"
        text: "Choose which doctor or specialist you should see."
        inverse: false
      - id: "delg_behalf_10"
        text: "Summarize your health information for you."
        inverse: false
      - id: "delg_behalf_11"
        text: "Provide you with recommendations on treating minor health issues."
        inverse: false
      - id: "delg_behalf_12"
        text: "Decide whether you should seek medical care for a symptom."
        inverse: false
      - id: "delg_behalf_13"
        text: "Choose which treatment option you should follow for a diagnosed condition."
        inverse: false
      - id: "delg_behalf_14"
        text: "Make emergency medical decisions for you if you are unable to do so."
        inverse: false
    score_range: [1, 6]
    format: "Likert"
    calculation: "mean"
    response_options: "1 = Definitely not, 2 = Probably not, 3 = Not sure, 4 = Probably yes, 5 = Definitely yes, 6 = Not applicable to me"
    missing_response_option: ["6"]
    output: "willingness_to_delegate"
    reference: "self"
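The scale above carries a `missing_response_option` field: code "6" ("Not applicable to me") sits inside `score_range` but must be excluded before averaging. A minimal sketch of that exclusion step (names are illustrative, not the repository's actual implementation):

```python
def score_with_missing(responses, missing_options):
    """Mean-score a scale, dropping configured missing codes first.

    responses: dict of item_id -> numeric response
    missing_options: codes as strings, e.g. ["6"] for
    "Not applicable to me" in willingness_to_delegate.yaml
    """
    missing = {int(code) for code in missing_options}
    kept = [v for v in responses.values() if v not in missing]
    if not kept:
        # participant marked every item "not applicable"
        return None
    return sum(kept) / len(kept)

print(score_with_missing({"delg_behalf_1": 4, "delg_behalf_5": 6},
                         ["6"]))  # → 4.0
```

Returning `None` when all items are excluded keeps an all-"not applicable" participant distinguishable from a participant with a genuinely low mean.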
13  config/questionnaires/willingness_to_delegate_change.yaml  Normal file
@@ -0,0 +1,13 @@
questionnaire: "willingness_to_delegate_change"
scales:
  - name: "willingness_to_delegate_change"
    label: "Perceived Change in Willingness to Delegate Tasks to AI"
    items:
      - id: "perc_chng_willg_delg"
        text: "Compared to when this study started, how has your willingness to delegate tasks to AI changed?"
    score_range: [1, 5]
    format: "Multiple Choice"
    calculation: "response"
    response_options: "1 = Definitely not, 2 = Probably not, 3 = Not sure, 4 = Probably yes, 5 = Definitely yes"
    output: "willingness_to_delegate_change"
    reference: "self"
14  config/questionnaires/willingness_to_delegate_future.yaml  Normal file
@@ -0,0 +1,14 @@
questionnaire: "willingness_to_delegate_future"
scales:
  - name: "willingness_to_delegate_future"
    label: "Future Willingness to Delegate More Tasks to AI"
    items:
      - id: "delg_future"
        text: "In the future, do you think you will let AI do more and more tasks for you?"
        inverse: false
    score_range: [1, 5]
    format: "Multiple Choice"
    calculation: "response"
    response_options: "1 = Definitely not, 2 = Probably not, 3 = Not sure, 4 = Probably yes, 5 = Definitely yes"
    output: "willingness_to_delegate_future"
    reference: "self"
177  config/waves/wave1.yaml  Normal file
@@ -0,0 +1,177 @@
wave: 1
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "apple_use"
    path: "apple_use.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "bigfive"
    path: "bigfive.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "demographics"
    path: "demographics.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perceived_task_characteristics_delegation"
    path: "perceived_task_characteristics_delegation.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  apple_use: "all"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  context_of_use_user: "users"
  context_of_use_no_user: "nonusers"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  creepiness_favorite_ai_user: "users"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  delegation_comfort: "all"
  enjoyment_favorite_ai_user: "users"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  loneliness: "all"
  modality_favorite_ai: "users"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"

skip_scales:
  - creepiness_ai_no_user
  - social_presence_being_favorite_ai

composite_scales:
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_role_of_ai_overall:
    scales: [perceived_role_of_ai_favorite_ai, perceived_role_of_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
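The `method: "coalesce"` composites above merge a users-only column and a nonusers-only column into one overall variable: each participant answered exactly one variant, so the composite takes the first non-missing value. An illustrative sketch under that assumption (function name is hypothetical):

```python
def coalesce(*columns):
    """Row-wise coalesce: first non-missing value per participant.

    Mirrors a composite such as usefulness_overall, which merges
    usefulness_favorite_ai (users) and usefulness_no_user (nonusers).
    Each column is a dict of participant_id -> value or None.
    """
    merged = {}
    for col in columns:
        for pid, value in col.items():
            # keep the first non-missing value encountered, in column order
            if pid not in merged and value is not None:
                merged[pid] = value
    return merged

users_col = {"p1": 80, "p2": None}
nonusers_col = {"p2": 55, "p3": 40}
print(coalesce(users_col, nonusers_col))  # → {'p1': 80, 'p2': 55, 'p3': 40}
```

Column order matters only if a participant somehow has values in both columns; under the users/nonusers split that should not happen, so the result is order-independent in practice.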
171  config/waves/wave2.yaml  Normal file
@@ -0,0 +1,171 @@
wave: 2
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "effects_on_work"
    path: "effects_on_work.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "us_voting_and_mood"
    path: "us_voting_and_mood.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  context_of_use_user: "users"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  creepiness_favorite_ai_user: "users"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  delegation_comfort: "all"
  effects_on_work: "all"
  enjoyment_favorite_ai_user: "users"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  loneliness: "all"
  modality_favorite_ai: "users"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"
  voting_decision: "all"
  voting_mood: "all"

skip_scales:
  - context_of_use_no_user
  - creepiness_ai_no_user
  - social_presence_being_favorite_ai

composite_scales:
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
179  config/waves/wave3.yaml  Normal file
@@ -0,0 +1,179 @@
wave: 3
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "effects_on_work"
    path: "effects_on_work.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs"
    path: "needs.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "personality_specific_traits"
    path: "personality_specific_traits.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  context_of_use_user: "users"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  creepiness_favorite_ai_user: "users"
  creepiness_ai_no_user: "nonusers"
  delegation_comfort: "all"
  effects_on_work: "all"
  enjoyment_favorite_ai_user: "users"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  loneliness: "all"
  modality_favorite_ai: "users"
  need_to_belong: "all"
  need_for_cognition: "all"
  need_for_closure: "all"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  personality_specific_traits: "all"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"

skip_scales:
  - context_of_use_no_user
  - social_presence_being_favorite_ai

composite_scales:
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  creepiness_overall:
    scales: [creepiness_favorite_ai_user, creepiness_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
215
config/waves/wave4.yaml
Normal file
@ -0,0 +1,215 @@
wave: 4
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "ai_adoption_factors_open_question"
    path: "ai_adoption_factors_open_question.yaml"
  - name: "ai_aversion"
    path: "ai_aversion.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "concerns_about_loss_of_autonomy"
    path: "concerns_about_loss_of_autonomy.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "distrust_toward_ai_corporations"
    path: "distrust_toward_ai_corporations.yaml"
  - name: "ecological_concerns"
    path: "ecological_concerns.yaml"
  - name: "effects_on_work"
    path: "effects_on_work.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "ethical_concerns_general"
    path: "ethical_concerns_general.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "knowledge_how_to_start_using_ai"
    path: "knowledge_how_to_start_using_ai.yaml"
  - name: "lack_of_fomo"
    path: "lack_of_fomo.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs"
    path: "needs.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_ai_should_take"
    path: "perceived_role_ai_should_take.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perceived_lack_of_need"
    path: "perceived_lack_of_need.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "personality_specific_traits"
    path: "personality_specific_traits.yaml"
  - name: "preference_for_status_quo"
    path: "preference_for_status_quo.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "security_concerns"
    path: "security_concerns.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  ai_adoption_factors_open_question: "nonusers"
  ai_aversion_no_user: "nonusers"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  context_of_use_user: "users"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  concerns_about_loss_of_autonomy_no_user: "nonusers"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  creepiness_favorite_ai_user: "users"
  creepiness_ai_no_user: "nonusers"
  delegation_comfort: "all"
  distrust_toward_ai_corporations_no_user: "nonusers"
  ecological_concerns_no_user: "nonusers"
  effects_on_work: "all"
  enjoyment_favorite_ai_user: "users"
  ethical_concerns_general_no_user: "nonusers"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  knowledge_how_to_start_using_ai_no_user: "nonusers"
  lack_of_fomo_no_user: "nonusers"
  loneliness: "all"
  modality_favorite_ai: "users"
  need_to_belong: "all"
  need_for_cognition: "all"
  need_for_closure: "all"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_lack_of_need_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_ai_should_take_no_user: "nonusers"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  personality_specific_traits: "all"
  preference_for_status_quo_no_user: "nonusers"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  security_concerns_no_user: "nonusers"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  social_presence_being_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"

skip_scales:
  - context_of_use_no_user

composite_scales:
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  creepiness_overall:
    scales: [creepiness_favorite_ai_user, creepiness_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
215
config/waves/wave5.yaml
Normal file
@ -0,0 +1,215 @@
wave: 5
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "ai_adoption_factors_open_question"
    path: "ai_adoption_factors_open_question.yaml"
  - name: "ai_aversion"
    path: "ai_aversion.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "concerns_about_loss_of_autonomy"
    path: "concerns_about_loss_of_autonomy.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "distrust_toward_ai_corporations"
    path: "distrust_toward_ai_corporations.yaml"
  - name: "ecological_concerns"
    path: "ecological_concerns.yaml"
  - name: "effects_on_work"
    path: "effects_on_work.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "ethical_concerns_general"
    path: "ethical_concerns_general.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "knowledge_how_to_start_using_ai"
    path: "knowledge_how_to_start_using_ai.yaml"
  - name: "lack_of_fomo"
    path: "lack_of_fomo.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs"
    path: "needs.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_ai_should_take"
    path: "perceived_role_ai_should_take.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perceived_lack_of_need"
    path: "perceived_lack_of_need.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "personality_specific_traits"
    path: "personality_specific_traits.yaml"
  - name: "preference_for_status_quo"
    path: "preference_for_status_quo.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "security_concerns"
    path: "security_concerns.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  ai_adoption_factors_open_question: "nonusers"
  ai_aversion_no_user: "nonusers"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  context_of_use_user: "users"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  concerns_about_loss_of_autonomy_no_user: "nonusers"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  creepiness_favorite_ai_user: "users"
  creepiness_ai_no_user: "nonusers"
  delegation_comfort: "all"
  distrust_toward_ai_corporations_no_user: "nonusers"
  ecological_concerns_no_user: "nonusers"
  effects_on_work: "all"
  enjoyment_favorite_ai_user: "users"
  ethical_concerns_general_no_user: "nonusers"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  knowledge_how_to_start_using_ai_no_user: "nonusers"
  lack_of_fomo_no_user: "nonusers"
  loneliness: "all"
  modality_favorite_ai: "users"
  need_to_belong: "all"
  need_for_cognition: "all"
  need_for_closure: "all"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_lack_of_need_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_ai_should_take_no_user: "nonusers"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  personality_specific_traits: "all"
  preference_for_status_quo_no_user: "nonusers"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  security_concerns_no_user: "nonusers"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  social_presence_being_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"

skip_scales:
  - context_of_use_no_user

composite_scales:
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  creepiness_overall:
    scales: [creepiness_favorite_ai_user, creepiness_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
329
config/waves/wave6.yaml
Normal file
@ -0,0 +1,329 @@
wave: 6
participant_id_column: "subj_id"

questionnaires:
  - name: "usage_and_experience"
    path: "usage_and_experience.yaml"
  - name: "agency"
    path: "agency.yaml"
  - name: "ai_adoption_factors_open_question"
    path: "ai_adoption_factors_open_question.yaml"
  - name: "ai_aversion"
    path: "ai_aversion.yaml"
  - name: "attitudes"
    path: "attitudes.yaml"
  - name: "attitudes_toward_ai_decisions"
    path: "attitudes_toward_ai_decisions.yaml"
  - name: "attitudes_toward_disclosure"
    path: "attitudes_toward_disclosure.yaml"
  - name: "attitudes_usage"
    path: "attitudes_usage.yaml"
  - name: "barrier_for_use"
    path: "barrier_for_use.yaml"
  - name: "change_in_writing_without_ai"
    path: "change_in_writing_without_ai.yaml"
  - name: "change_of_personal_role"
    path: "change_of_personal_role.yaml"
  - name: "context_of_use"
    path: "context_of_use.yaml"
  - name: "consequences_ai_use"
    path: "consequences_ai_use.yaml"
  - name: "cognitiv_selfesteem"
    path: "cognitiv_selfesteem.yaml"
  - name: "closeness"
    path: "closeness.yaml"
  - name: "companionship"
    path: "companionship.yaml"
  - name: "concerns_about_loss_of_autonomy"
    path: "concerns_about_loss_of_autonomy.yaml"
  - name: "credibility"
    path: "credibility.yaml"
  - name: "creepiness"
    path: "creepiness.yaml"
  - name: "delegation_comfort"
    path: "delegation_comfort.yaml"
  - name: "distrust_toward_ai_corporations"
    path: "distrust_toward_ai_corporations.yaml"
  - name: "ecological_concerns"
    path: "ecological_concerns.yaml"
  - name: "effect_on_behavior_toward_people"
    path: "effect_on_behavior_toward_people.yaml"
  - name: "effects_on_work"
    path: "effects_on_work.yaml"
  - name: "enjoyment"
    path: "enjoyment.yaml"
  - name: "ethical_concerns_delegation"
    path: "ethical_concerns_delegation.yaml"
  - name: "ethical_concerns_general"
    path: "ethical_concerns_general.yaml"
  - name: "parasocial_behavior"
    path: "parasocial_behavior.yaml"
  - name: "favourite_ai"
    path: "favorite_ai.yaml"
  - name: "general_experience_ai"
    path: "general_experience_ai.yaml"
  - name: "generalized_mind_perception"
    path: "generalized_mind_perception.yaml"
  - name: "hope_and_concern"
    path: "hope_and_concern.yaml"
  - name: "impact_of_delegation_on_skills"
    path: "impact_of_delegation_on_skills.yaml"
  - name: "impact_in_general_on_skills"
    path: "impact_in_general_on_skills.yaml"
  - name: "intention_usage"
    path: "intention_usage.yaml"
  - name: "knowledge"
    path: "knowledge.yaml"
  - name: "knowledge_how_to_start_using_ai"
    path: "knowledge_how_to_start_using_ai.yaml"
  - name: "lack_of_fomo"
    path: "lack_of_fomo.yaml"
  - name: "loneliness"
    path: "loneliness.yaml"
  - name: "microblog_and_social_network_usage"
    path: "microblog_and_social_network_usage.yaml"
  - name: "machine_heuristic"
    path: "machine_heuristic.yaml"
  - name: "mind_perception"
    path: "mind_perception.yaml"
  - name: "modality"
    path: "modality.yaml"
  - name: "needs"
    path: "needs.yaml"
  - name: "needs_satisfaction"
    path: "needs_satisfaction.yaml"
  - name: "number_of_tasks_delegated_to_ai"
    path: "number_of_tasks_delegated_to_ai.yaml"
  - name: "perceived_anthropomorphism"
    path: "perceived_anthropomorphism.yaml"
  - name: "perceived_changes_attitudes_usage"
    path: "perceived_changes_attitudes_usage.yaml"
  - name: "perceived_intelligence"
    path: "perceived_intelligence.yaml"
  - name: "perceived_lack_of_need"
    path: "perceived_lack_of_need.yaml"
  - name: "perceived_moral_agency"
    path: "perceived_moral_agency.yaml"
  - name: "perceived_reliance_on_ai"
    path: "perceived_reliance_on_ai.yaml"
  - name: "perceived_role_ai_should_take"
    path: "perceived_role_ai_should_take.yaml"
  - name: "perceived_role_of_ai"
    path: "perceived_role_of_ai.yaml"
  - name: "perception_of_being_left_behind"
    path: "perception_of_being_left_behind.yaml"
  - name: "perception_tool_actor"
    path: "perception_tool_actor.yaml"
  - name: "personality_specific_traits"
    path: "personality_specific_traits.yaml"
  - name: "potential_motivators_for_ai_usage"
    path: "potential_motivators_for_ai_usage.yaml"
  - name: "preference_for_status_quo"
    path: "preference_for_status_quo.yaml"
  - name: "preferred_level_of_delegation"
    path: "preferred_level_of_delegation.yaml"
  - name: "reason_for_not_using_ai"
    path: "reason_for_not_using_ai.yaml"
  - name: "risk_opportunity_perception"
    path: "risk_opportunity_perception.yaml"
  - name: "security_concerns"
    path: "security_concerns.yaml"
  - name: "self_efficacy"
    path: "self_efficacy.yaml"
  - name: "social_presence"
    path: "social_presence.yaml"
  - name: "task_types"
    path: "task_types.yaml"
  - name: "trust"
    path: "trust.yaml"
  - name: "two_part_trust"
    path: "two_part_trust.yaml"
  - name: "usage_frequency"
    path: "usage_frequency.yaml"
  - name: "usefulness"
    path: "usefulness.yaml"
  - name: "willingness_to_delegate"
    path: "willingness_to_delegate.yaml"
  - name: "willingness_to_delegate_change"
    path: "willingness_to_delegate_change.yaml"
  - name: "willingness_to_delegate_future"
    path: "willingness_to_delegate_future.yaml"

subgroup_scales:
  user: "all"
  open_to_use: "all"
  ai_adoption_factors_open_question: "nonusers"
  ai_aversion_no_user: "nonusers"
  agency_favorite_ai: "users"
  agency_no_user: "nonusers"
  attitudes: "all"
  attitudes_toward_ai_decisions: "all"
  attitudes_toward_disclosure: "all"
  attitudes_usage: "all"
  barrier_for_use: "all"
  change_in_writing_without_ai: "all"
  change_of_personal_role: "all"
  context_of_use_user: "users"
  consequences_ai_use_user: "users"
  consequences_ai_use_no_user: "nonusers"
  cognitive_selfesteem_thinking: "all"
  cognitive_selfesteem_memory: "all"
  cognitive_selfesteem_transactive_memory: "all"
  closeness_favorite_ai: "users"
  companionship_favorite_ai: "users"
  concerns_about_loss_of_autonomy_no_user: "nonusers"
  credibility_favorite_ai: "users"
  credibility_ai_no_user: "nonusers"
  creepiness_favorite_ai_user: "users"
  creepiness_ai_no_user: "nonusers"
  delegation_comfort: "all"
  distrust_toward_ai_corporations_no_user: "nonusers"
  ecological_concerns_no_user: "nonusers"
  effect_on_behavior_toward_people_user: "users"
  effect_on_behavior_toward_people_no_user: "nonusers"
  effects_on_work: "all"
  enjoyment_favorite_ai_user: "users"
  ethical_concerns_delegation: "all"
  ethical_concerns_general_no_user: "nonusers"
  parasocial_behavior_favorite_ai: "users"
  parasocial_behavior_no_user: "nonusers"
  choice_favorite_ai_user: "users"
  choice_favorite_ai_no_user: "nonusers"
  general_experience_ai: "all"
  generalized_mind_perception_favorite_ai: "users"
  generalized_mind_perception_no_user: "nonusers"
  hope_and_concern: "all"
  impact_of_delegation_on_skills: "all"
  impact_in_general_on_skills_user: "users"
  impact_in_general_on_skills_no_user: "nonusers"
  intention_use_favorite_ai: "users"
  intention_use_no_user: "nonusers"
  knowledge: "all"
  knowledge_how_to_start_using_ai_no_user: "nonusers"
  lack_of_fomo_no_user: "nonusers"
  loneliness: "all"
  machine_heuristic_1_favorite_ai: "users"
  machine_heuristic_1_no_user: "nonusers"
  machine_heuristic_2_favorite_ai: "users"
  machine_heuristic_2_no_user: "nonusers"
  mind_perception_favorite_ai: "users"
  mind_perception_no_user: "nonusers"
  modality_favorite_ai: "users"
  need_to_belong: "all"
  need_for_cognition: "all"
  need_for_closure: "all"
  number_of_tasks_delegated_to_ai: "all"
  perceived_anthropomorphism_favorite_ai: "users"
  perceived_anthropomorphism_ai_no_user: "nonusers"
  perceived_changes_attitudes_usage: "users"
  perceived_intelligence_favorite_ai: "users"
  perceived_intelligence_ai_no_user: "nonusers"
  perceived_lack_of_need_no_user: "nonusers"
  perceived_moral_agency_favorite_ai: "users"
  perceived_moral_agency_no_user: "nonusers"
  perceived_reliance_on_ai_user: "users"
  perceived_role_ai_should_take_no_user: "nonusers"
  perceived_role_of_ai_favorite_ai: "users"
  perceived_role_of_ai_no_user: "nonusers"
  perception_of_being_left_behind: "all"
  perception_tool_actor_favorite_ai: "users"
  perception_tool_actor_no_user: "nonusers"
  personality_specific_traits: "all"
  potential_motivators_for_ai_usage: "all"
  preference_for_status_quo_no_user: "nonusers"
  preferred_level_of_delegation: "all"
  reason_for_not_using_ai: "nonusers"
  risk_opportunity_perception: "all"
  security_concerns_no_user: "nonusers"
  self_efficacy_without_ai_creativity: "all"
  self_efficacy_without_ai_problem_solving: "all"
  self_efficacy_with_ai_creativity: "all"
  self_efficacy_with_ai_problem_solving: "all"
  social_presence_sense_favorite_ai: "users"
  social_presence_being_favorite_ai: "users"
  task_types_general: "users"
  task_types_favorite_ai: "users"
  trust_favorite_ai: "users"
  trust_competence: "users"
  trust_dependability: "users"
  general_ai_usage_frequency: "all"
  favorite_ai_usage_frequency: "users"
  usefulness_favorite_ai: "users"
  usefulness_no_user: "nonusers"
  willingness_to_delegate: "all"
  willingness_to_delegate_change: "all"
  willingness_to_delegate_future: "all"

skip_scales:
  - context_of_use_no_user

composite_scales:
  agency_overall:
    scales: [agency_favorite_ai, agency_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  cognitive_selfesteem_overall:
    scales: [cognitive_selfesteem_thinking, cognitive_selfesteem_memory, cognitive_selfesteem_transactive_memory]
    method: "weighted_mean"
    subgroup: "all"
    keep_subscales: true
  credibility_overall:
    scales: [credibility_favorite_ai, credibility_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  creepiness_overall:
    scales: [creepiness_favorite_ai_user, creepiness_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  effect_on_behavior_toward_people_overall:
    scales: [effect_on_behavior_toward_people_user, effect_on_behavior_toward_people_no_user]
    method: "coalesce"
    subgroup: "all"
  favorite_ai_system_overall:
    scales: [choice_favorite_ai_user, choice_favorite_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  impact_in_general_on_skills_overall:
    scales: [impact_in_general_on_skills_user, impact_in_general_on_skills_no_user]
    method: "coalesce"
    subgroup: "all"
  intention_use_overall:
    scales: [intention_use_favorite_ai, intention_use_no_user]
    method: "coalesce"
    subgroup: "all"
  machine_heuristic_1_overall:
    scales: [machine_heuristic_1_favorite_ai, machine_heuristic_1_no_user]
    method: "coalesce"
    subgroup: "all"
  machine_heuristic_2_overall:
    scales: [machine_heuristic_2_favorite_ai, machine_heuristic_2_no_user]
    method: "coalesce"
    subgroup: "all"
  mind_perception_overall:
    scales: [mind_perception_favorite_ai, mind_perception_no_user]
    method: "coalesce"
    subgroup: "all"
  parasocial_behavior_overall:
    scales: [parasocial_behavior_favorite_ai, parasocial_behavior_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_anthropomorphism_overall:
    scales: [perceived_anthropomorphism_favorite_ai, perceived_anthropomorphism_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_intelligence_overall:
    scales: [perceived_intelligence_favorite_ai, perceived_intelligence_ai_no_user]
    method: "coalesce"
    subgroup: "all"
  perceived_moral_agency_overall:
    scales: [perceived_moral_agency_favorite_ai, perceived_moral_agency_no_user]
    method: "coalesce"
    subgroup: "all"
  perception_tool_actor_overall:
    scales: [perception_tool_actor_favorite_ai, perception_tool_actor_no_user]
    method: "coalesce"
    subgroup: "all"
  usefulness_overall:
    scales: [usefulness_favorite_ai, usefulness_no_user]
    method: "coalesce"
    subgroup: "all"
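With this many hand-written scale names, a quick consistency check that every composite's input scales are declared in `subgroup_scales` can catch typos early. The helper below is hypothetical (not part of the repository as shown); notably, wave 6 declares `agency_no_user` in `subgroup_scales` while `agency_overall` references `agency_ai_no_user`, which a check like this would flag (whether that mismatch is intentional or resolved elsewhere in the pipeline is unclear from the config alone).

```python
def check_composites(config: dict) -> list[str]:
    """Return composite input scales that are not declared in subgroup_scales."""
    known = set(config.get("subgroup_scales", {}))
    missing = []
    for name, spec in config.get("composite_scales", {}).items():
        for scale in spec["scales"]:
            if scale not in known:
                missing.append(f"{name}: {scale}")
    return missing


# Minimal excerpt of the wave 6 config above:
wave6 = {
    "subgroup_scales": {"agency_favorite_ai": "users", "agency_no_user": "nonusers"},
    "composite_scales": {
        "agency_overall": {
            "scales": ["agency_favorite_ai", "agency_ai_no_user"],
            "method": "coalesce",
            "subgroup": "all",
        }
    },
}
print(check_composites(wave6))  # → ['agency_overall: agency_ai_no_user']
```

Run over a full wave config (e.g. after loading `config/waves/wave6.yaml` with PyYAML), an empty result means every composite is wired to declared scales.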
51
pyproject.toml
Normal file
@ -0,0 +1,51 @@
[build-system]
requires = ["flit_core>=3.2"]
build-backend = "flit_core.buildapi"

[project]
name = "HMC_preprocessing"
version = "1.0.1"
description = "Preprocessing for the longitudinal study on AI usage (Human Machine Communication project)"
authors = [
    {name = "Gerrit Anders", email = "g.anders@iwm-tuebingen.de"}
]
readme = "README.md"
license = {text = "GPL-3.0-or-later"}
classifiers = [
    "Programming Language :: Python :: 3",
    "Programming Language :: Python :: 3.10",
    "Programming Language :: Python :: 3.11",
    "License :: OSI Approved :: GNU General Public License v3 (GPLv3)",
    "Operating System :: OS Independent"
]
requires-python = ">=3.10"
dependencies = [
    "pandas==2.2.3",
    "PyYAML==6.0.2",
    "pypandoc~=1.15",
    "openpyxl==3.1.5",
]

[project.optional-dependencies]
dev = [
    "pytest-cov>=4.0",
    "mypy>=1.0",
    "ruff>=0.11",
    "black>=21.0",
    "types-PyYAML",
    "pre-commit>=3.0",
]

[tool.black]
line-length = 88

[tool.ruff]
line-length = 88

[tool.ruff.lint]
extend-select = [
    "UP",  # pyupgrade
]

[tool.ruff.lint.pydocstyle]
convention = "google"
36
settings-example.yaml
Normal file
@ -0,0 +1,36 @@
# Global settings for the longitudinal study data processing

# Path to the data folder
data_directory: "data"

# Folder containing all questionnaire YAMLs (relative or absolute path)
questionnaire_directory: "config/questionnaires"

# Map wave numbers to the data file for that wave
data_file_for_each_wave:
  1: "HMC_wave1_cleaned.csv"
  2: "HMC_wave2_cleaned.csv"
  3: "HMC_wave3_cleaned.csv"
  4: "HMC_wave4_cleaned.csv"
  5: "HMC_wave5_cleaned.csv"
  6: "HMC_wave6_cleaned.csv"

# Explicit map from wave number to its config
config_file_for_each_wave:
  1: "config/waves/wave1.yaml"
  2: "config/waves/wave2.yaml"
  3: "config/waves/wave3.yaml"
  4: "config/waves/wave4.yaml"
  5: "config/waves/wave5.yaml"
  6: "config/waves/wave6.yaml"

# Configure the output database
output:
  database_path: "results/HMC_data.sqlite"
  export_csv: true
  export_excel: true
  csv_output_directory: "results/csv"
  excel_output_directory: "results/excel"

# Name of the created PDF file for database documentation (optional)
api_reference_pdf: "database_api_reference.pdf"
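A quick sketch of how these settings drive file lookup, with plain dicts standing in for the parsed YAML (`resolve_wave_paths` is a hypothetical helper for illustration, not part of the package):

```python
import os

# A subset of settings-example.yaml as a plain dict.
settings = {
    "data_directory": "data",
    "data_file_for_each_wave": {1: "HMC_wave1_cleaned.csv", 2: "HMC_wave2_cleaned.csv"},
    "config_file_for_each_wave": {1: "config/waves/wave1.yaml", 2: "config/waves/wave2.yaml"},
}

def resolve_wave_paths(settings: dict, wave: int) -> tuple[str, str]:
    """Return (data_path, config_path) for one wave."""
    data_path = os.path.join(
        settings["data_directory"], settings["data_file_for_each_wave"][wave]
    )
    return data_path, settings["config_file_for_each_wave"][wave]

data_path, config_path = resolve_wave_paths(settings, 1)
```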
0
src/__init__.py
Normal file
121
src/composite_processor.py
Normal file
@ -0,0 +1,121 @@
import pandas as pd


def process_composites(
    dataframe: pd.DataFrame,
    composite_scale_specifications: dict,
    wave_alphas: dict[str, float | None],
    scale_item_counts: dict[str, int] | None = None,
) -> tuple[pd.DataFrame, dict]:
    """Compute composite scale columns based on provided specifications.

    Iterates over composite scale definitions and calculates new columns using the
    specified aggregation method ('mean', 'sum', 'weighted_mean', 'coalesce').
    Supports subgroup filtering if specified.

    Args:
        dataframe (pd.DataFrame): DataFrame containing all computed scales.
        composite_scale_specifications (dict): Dictionary with composite scale definitions from the wave config.
        wave_alphas (dict[str, float | None]): Existing dictionary of Cronbach's alpha values for scales in the wave.
        scale_item_counts (dict[str, int] | None): Number of items per scale, used as default
            weights for 'weighted_mean'. Defaults to None.

    Returns:
        tuple[pd.DataFrame, dict]: DataFrame containing the new composite scale columns
            and updated alpha values dictionary.

    Raises:
        ValueError: If required columns are missing, or an unknown method is specified.
        NotImplementedError: If the 'categorical' method is requested.
    """
    composites: dict = {}
    updated_alphas: dict = {}

    for (
        composite_scale_name,
        composite_scale_specification,
    ) in composite_scale_specifications.items():
        scale_columns: list = composite_scale_specification.get("scales", [])
        method: str = composite_scale_specification.get("method", "mean")
        subgroup: str = composite_scale_specification.get("subgroup", "all")
        if len(scale_columns) == 0:
            continue
        missing: list[str] = [
            col for col in scale_columns if col not in dataframe.columns
        ]
        if missing:
            raise ValueError(
                f"Missing columns for composite {composite_scale_name}: {missing}"
            )

        mask: pd.Series = pd.Series(True, index=dataframe.index)
        if subgroup and subgroup != "all" and subgroup in dataframe.columns:
            mask = dataframe[subgroup].astype(bool)

        dataframe_subset: pd.DataFrame = dataframe.loc[mask, scale_columns]

        if method == "mean":
            composite_scores: pd.Series = dataframe_subset.mean(axis=1)
        elif method == "sum":
            composite_scores = dataframe_subset.sum(axis=1)
        elif method == "weighted_mean":
            weights_spec = composite_scale_specification.get("weights")

            if weights_spec is not None:
                weights = pd.Series(weights_spec, dtype="float64")
                weights = weights.reindex(scale_columns)
                if weights.isna().any():
                    missing_weights = weights[weights.isna()].index.tolist()
                    raise ValueError(
                        f"Composite {composite_scale_name}: Missing weights for scales {missing_weights}"
                    )
            elif scale_item_counts is not None:
                weights = pd.Series(
                    [scale_item_counts.get(col, 1) for col in scale_columns],
                    index=scale_columns,
                    dtype="float64",
                )
            else:
                raise ValueError(
                    f"Composite {composite_scale_name}: No weights specified and no scale_item_counts provided."
                )

            weighted_values = dataframe_subset.mul(weights, axis=1)

            numerator = weighted_values.sum(axis=1, skipna=True)
            denom_weights = dataframe_subset.notna().mul(weights, axis=1)
            denominator = denom_weights.sum(axis=1)

            composite_scores = numerator / denominator
            composite_scores = composite_scores.where(denominator > 0, pd.NA)

        elif method == "categorical":
            raise NotImplementedError(
                "'categorical' method is not supported as a composite aggregation (use 'coalesce')."
            )
        elif method == "coalesce":

            def coalesce_row(row: pd.Series):
                present: pd.Series = row.notna()
                if present.sum() > 1:
                    raise ValueError(
                        f"Composite '{composite_scale_name}': More than one non-missing value in row (participant_id={dataframe.loc[row.name, 'participant_id']}): {row[present].to_dict()}"
                    )
                return row[present].iloc[0] if present.any() else pd.NA

            composite_scores = dataframe_subset.apply(coalesce_row, axis=1)

            constituent_alphas = [
                wave_alphas.get(col)
                for col in scale_columns
                if wave_alphas and col in wave_alphas and wave_alphas[col] is not None
            ]

            if constituent_alphas:
                updated_alphas[composite_scale_name] = constituent_alphas

        else:
            raise ValueError(f"Unknown composite method: {method}")

        result_column: pd.Series = pd.Series(pd.NA, index=dataframe.index)
        result_column[mask] = composite_scores
        composites[composite_scale_name] = result_column

    return pd.DataFrame(composites), updated_alphas
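The `weighted_mean` branch weights each subscale by its item count (or explicit weights) and renormalises per row over the weights of the subscales that are actually present. The arithmetic, isolated on toy data (column names and values are illustrative):

```python
import pandas as pd

# Two subscales with 4 and 6 items; rows with a missing subscale
# are averaged over the weights of the non-missing subscales only.
subset = pd.DataFrame({"sub_a": [4.0, 2.0, None], "sub_b": [1.0, None, 3.0]})
weights = pd.Series({"sub_a": 4.0, "sub_b": 6.0})

numerator = subset.mul(weights, axis=1).sum(axis=1)                 # sum of w_i * x_i
denominator = subset.notna().mul(weights, axis=1).sum(axis=1)       # sum of w_i where x_i present
weighted_mean = (numerator / denominator).where(denominator > 0, pd.NA)
```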
175
src/process_all_waves.py
Normal file
@ -0,0 +1,175 @@
from typing import Any
from logging import Logger

import pandas as pd

from src.scale_processor import ScaleProcessor
from src.composite_processor import process_composites
from src.utils.data_loader import assemble_wave_info, load_yaml


class DataPreprocessingAllWaves:
    """Class for preprocessing data across all waves of the study.

    This class loads data and configuration for each wave, processes scales and composites,
    and returns preprocessed DataFrames for each wave.
    """

    def __init__(
        self, data_with_configs: dict, settings: dict[str, Any], logger: Logger
    ):
        """Initialize the preprocessing class with data and settings.

        Args:
            data_with_configs (dict): Dictionary mapping wave numbers to their data and config paths.
            settings (dict[str, Any]): Project settings loaded from the settings file.
            logger (Logger): Logger instance used for progress and warning messages.
        """
        self.data_with_configs: dict = data_with_configs
        self.settings: dict[str, Any] = settings
        self.logger: Logger = logger
        self.cronbachs_alphas: dict[str, dict[int, float]] = {}

    def _aggregate_cronbachs_alpha_values(
        self,
        scale_name: str,
        alpha_value: float | None,
        wave_number: int,
        coalesced: bool = False,
    ) -> None:
        """Aggregate Cronbach's alpha values across waves.

        Args:
            scale_name (str): Name of the scale.
            alpha_value (float | None): Cronbach's alpha value for the scale.
            wave_number (int): Current wave number.
            coalesced (bool): Whether this is a coalesced composite scale.
        """
        if alpha_value is None:
            return

        if scale_name not in self.cronbachs_alphas:
            self.cronbachs_alphas[scale_name] = {}

        self.cronbachs_alphas[scale_name][wave_number] = alpha_value

    def preprocess_data(self) -> dict[int, pd.DataFrame]:
        """Preprocess data for all waves.

        Loads configuration for each wave, processes scales and composite scales,
        and returns a dictionary of preprocessed DataFrames indexed by wave number.

        Returns:
            dict[int, pd.DataFrame]: Dictionary mapping wave numbers to their preprocessed DataFrames.

        Raises:
            ValueError: If required configuration keys or columns are missing.
        """
        all_preprocessed: dict = {}

        for wave_number, data_of_wave in self.data_with_configs.items():
            data = data_of_wave["data"]
            config_path = data_of_wave["config_path"]
            wave_config = load_yaml(config_path)

            participant_id_column = wave_config.get("participant_id_column")
            if participant_id_column is None:
                raise ValueError(
                    f"Wave {wave_number}: Required key 'participant_id_column' missing in config '{config_path}'."
                )
            if participant_id_column not in data.columns:
                raise ValueError(
                    f"Wave {wave_number}: Participant ID column '{participant_id_column}' not found in the data for config '{config_path}'."
                )

            (
                scale_dict,
                subgroup_scales,
                skip_scales,
                composite_scales,
            ) = assemble_wave_info(config_path, self.settings)
            scale_dfs: list = []
            all_scale_outputs: list = []

            scale_item_counts: dict[str, int] = {}

            for scale_name, subgroup in subgroup_scales.items():
                if scale_name in skip_scales:
                    continue

                if scale_name not in scale_dict:
                    raise ValueError(
                        f"Scale {scale_name} not in loaded scale configs (check YAML)."
                    )

                scale_config = scale_dict[scale_name]
                number_items = len(scale_config.get("items", []))
                output_scale_name = scale_config.get("output", scale_name)
                scale_item_counts[output_scale_name] = number_items

                scale_processor: ScaleProcessor = ScaleProcessor(
                    scale_config, logger=self.logger, subgroup_name=subgroup
                )
                scale_dataframe: pd.DataFrame = scale_processor.process(data)
                scale_dfs.append(scale_dataframe)
                all_scale_outputs.extend(scale_dataframe.columns.tolist())

                output_name = scale_processor.output
                self._aggregate_cronbachs_alpha_values(
                    output_name,
                    scale_processor.cronbachs_alpha,
                    wave_number,
                    coalesced=False,
                )

            result_dataframe: pd.DataFrame = pd.concat(
                [data[[participant_id_column]], *scale_dfs], axis=1
            )

            constituent_outputs: set = set()
            if composite_scales:
                wave_alpha_dict = {
                    scale_name: waves.get(wave_number)
                    for scale_name, waves in self.cronbachs_alphas.items()
                    if wave_number in waves
                }

                composite_dataframe, updated_alphas = process_composites(
                    result_dataframe,
                    composite_scales,
                    wave_alpha_dict,
                    scale_item_counts,
                )

                for scale_name, alpha_value in updated_alphas.items():
                    self._aggregate_cronbachs_alpha_values(
                        scale_name, alpha_value, wave_number, coalesced=True
                    )

                composite_output_names: list = list(composite_dataframe.columns)

                for composite_scale in composite_scales.values():
                    if composite_scale.get("keep_subscales", False):
                        continue

                    if "scales" in composite_scale:
                        constituent_outputs.update(composite_scale["scales"])
                result_dataframe = pd.concat(
                    [result_dataframe, composite_dataframe], axis=1
                )

                columns_to_keep: list = (
                    [participant_id_column]
                    + composite_output_names
                    + [
                        col
                        for col in result_dataframe.columns
                        if col not in constituent_outputs
                        and col not in composite_output_names
                        and col != participant_id_column
                    ]
                )
                result_dataframe = result_dataframe.loc[:, columns_to_keep]

            all_preprocessed[wave_number] = result_dataframe

        return all_preprocessed
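After composites are merged in, constituent subscales are dropped unless `keep_subscales` is set, and composite columns are moved to the front (right after the participant ID). The column-ordering step reduces to the following (scale names are hypothetical):

```python
participant_id = "participant_id"
all_columns = [
    "participant_id", "usefulness_favorite_ai", "usefulness_no_user",
    "usefulness_overall", "age",
]
composite_outputs = ["usefulness_overall"]
constituents = {"usefulness_favorite_ai", "usefulness_no_user"}

# ID first, then composites, then everything that is neither a
# constituent subscale nor already listed.
columns_to_keep = [participant_id] + composite_outputs + [
    col for col in all_columns
    if col not in constituents
    and col not in composite_outputs
    and col != participant_id
]
```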
387
src/scale_processor.py
Normal file
@ -0,0 +1,387 @@
from logging import Logger
from typing import Any

import pandas as pd
import numpy as np

from src.utils.utlis import to_snake_case


class ScaleProcessor:
    """Processes a single scale for a given subgroup within a DataFrame.

    This class supports various calculation types (mean, sum, categorical, ordinal, correct scoring, etc.)
    and handles item inversion, response mapping, and subgroup filtering.
    """

    def __init__(
        self, scale_config: dict, logger: Logger, subgroup_name: str | None = None
    ):
        """Initialize the ScaleProcessor with scale configuration and optional subgroup.

        Args:
            scale_config (dict): Dictionary containing scale configuration (name, items, calculation type, etc.).
            logger (Logger): Logger instance for info and warning messages.
            subgroup_name (str, optional): Name of the subgroup column for filtering. Defaults to None.
        """
        self.name: str = scale_config["name"]
        self.items: list = scale_config["items"]
        self.calculation: str = scale_config.get("calculation", "mean")
        self.score_min, self.score_max = scale_config.get("score_range", (1, 5))
        self.response_options: dict = scale_config.get("response_options", {})
        self.missing_response_option: set = set(
            scale_config.get("missing_response_option", [])
        )
        self.output: str = scale_config.get("output", self.name)
        self.subgroup: str | None = subgroup_name
        self.logger: Logger = logger
        self.cronbachs_alpha: float | None = None
        self.retain_single_items: bool = scale_config.get("retain_single_items", False)

    def calculate_cronbachs_alpha(
        self, data_frame: pd.DataFrame, item_ids: list
    ) -> float | None:
        """Calculate Cronbach's alpha for internal consistency.

        Args:
            data_frame (pd.DataFrame): DataFrame containing item responses.
            item_ids (list): List of item column names.

        Returns:
            float | None: Cronbach's alpha value or None if calculation fails.
        """
        if len(item_ids) < 2:
            return None

        try:
            item_data: pd.DataFrame = data_frame[item_ids].dropna()

            if len(item_data) < 10:
                self.logger.warning(
                    f"Insufficient data for Cronbach's alpha calculation for scale {self.name}"
                )
                return None

            inter_item_correlation_matrix = item_data.corr()
            number_of_items = len(item_ids)

            mask = np.triu(np.ones_like(inter_item_correlation_matrix, dtype=bool), k=1)
            values = inter_item_correlation_matrix.values[mask]

            average_inter_item_correlation = float(np.mean(values))

            cronbachs_alpha = (number_of_items * average_inter_item_correlation) / (
                1 + (number_of_items - 1) * average_inter_item_correlation
            )

            self.logger.info(
                f"Cronbach's alpha for scale {self.name}: {cronbachs_alpha:.4f}"
            )
            return cronbachs_alpha

        except Exception as e:
            self.logger.warning(
                f"Could not calculate Cronbach's alpha for scale {self.name}: {e}"
            )
            return None

    def check_items(self, dataframe: pd.DataFrame) -> None:
        """Check if all required item columns are present in the DataFrame.

        Args:
            dataframe (pd.DataFrame): Input DataFrame to check for required columns.

        Raises:
            ValueError: If any required item columns are missing.
        """
        missing: list = [
            item["id"] for item in self.items if item["id"] not in dataframe.columns
        ]
        if missing:
            raise ValueError(f"Missing columns in data: {missing}")

    def get_subgroup_mask(self, data_frame: pd.DataFrame) -> pd.Series:
        """Create a boolean mask for the specified subgroup.

        Args:
            data_frame (pd.DataFrame): Input DataFrame.

        Returns:
            pd.Series: Boolean mask indicating rows belonging to the subgroup.
        """
        if self.subgroup is None or self.subgroup.lower() == "all":
            return pd.Series(True, index=data_frame.index)

        if self.subgroup in data_frame.columns:
            return data_frame[self.subgroup].astype(bool)

        return pd.Series(True, index=data_frame.index)

    def process(self, data_frame) -> pd.DataFrame:
        """Process the scale for the given DataFrame and subgroup.

        Applies the specified calculation type, handles item inversion, response mapping,
        and returns a DataFrame with the computed scale score.

        Args:
            data_frame (pd.DataFrame): Input DataFrame containing item responses.

        Returns:
            pd.DataFrame: DataFrame with the computed scale score column.

        Raises:
            ValueError: If the calculation type is unknown, or required columns are missing.
        """
        self.check_items(data_frame)
        item_ids: list = [item["id"] for item in self.items]

        for item in self.items:
            if item.get("inverse", False):
                data_frame[item["id"]] = (
                    self.score_max + self.score_min - data_frame[item["id"]]
                )

        if (
            self.calculation in ["mean", "sum", "mean_correct", "sum_correct"]
            and len(item_ids) >= 2
        ):
            self.cronbachs_alpha = self.calculate_cronbachs_alpha(data_frame, item_ids)

        if any("correct" in item for item in self.items):
            correct_item_scoring_map: dict = {
                item["id"]: item["correct"] for item in self.items
            }
            dataframe_correct: pd.DataFrame = data_frame[item_ids].apply(
                lambda row: [
                    int(row[cid] == correct_item_scoring_map[cid]) for cid in item_ids
                ],
                axis=1,
                result_type="expand",
            )
            if self.calculation == "sum_correct":
                score = dataframe_correct.sum(axis=1)
            elif self.calculation == "mean_correct":
                score = dataframe_correct.mean(axis=1)
            else:
                raise ValueError(
                    f"Unknown calculation for objective items: {self.calculation}"
                )

        elif self.calculation == "mapped_mean":
            mapped_values = self.response_options
            score = (
                data_frame[item_ids]
                .apply(
                    lambda col: col.map(
                        lambda x: (
                            mapped_values.get(int(x), pd.NA)
                            if not pd.isnull(x)
                            else pd.NA
                        )
                    )
                )
                .mean(axis=1)
            )

        elif self.calculation == "ordinal":
            if len(item_ids) != 1:
                raise ValueError(
                    "calculation 'ordinal' only allowed with single-item scales"
                )
            category_map: dict = self.response_options
            if not isinstance(category_map, dict):
                raise ValueError(
                    "For calculation 'ordinal', response_options must be a dict mapping."
                )
            categories: dict = {int(k): v for k, v in category_map.items()}
            score = data_frame[item_ids[0]].apply(
                lambda x: (
                    categories.get(int(float(x)))
                    if not pd.isnull(x)
                    and str(int(float(x))) not in self.missing_response_option
                    else pd.NA
                )
            )

        elif self.calculation == "categorical":
            if len(item_ids) != 1:
                raise ValueError(
                    "calculation 'categorical' is only for single-item scales"
                )
            category_map = self.response_options
            if not isinstance(category_map, dict):
                raise ValueError(
                    "response_options must be a dict for calculation 'categorical'"
                )
            result = self._map_single_item(
                data_frame, item_ids[0], self.response_options
            )

            item_specification: dict = self.items[0]
            open_ended_id: Any = item_specification.get("open_ended_id")
            if open_ended_id and open_ended_id in data_frame.columns:
                # Response code "10" marks the open-ended 'other' option.
                other_mask: pd.Series = data_frame[item_ids[0]].apply(
                    lambda x: str(int(float(x))) == "10" if not pd.isnull(x) else False
                )
                result[self.output + "_other_text"] = data_frame[open_ended_id].where(
                    other_mask, None
                )
            return result

        elif self.calculation == "response":
            if len(item_ids) != 1:
                raise ValueError(
                    "calculation 'response' can only be used with single-item scales!"
                )
            score = data_frame[item_ids[0]]

        elif self.calculation == "boolean":
            if len(item_ids) != 1:
                raise ValueError("calculation 'boolean' is only for single-item scales")
            category_map = self.response_options
            if not isinstance(category_map, dict):
                raise ValueError(
                    "response_options must be a dict for calculation 'boolean'"
                )

            for v in category_map.values():
                if not isinstance(v, bool):
                    raise ValueError(
                        "response_options values for 'boolean' must be True/False"
                    )

            result = self._map_single_item(
                data_frame, item_ids[0], self.response_options
            )

            result[self.output] = result[self.output].astype("boolean")

            return result

        elif self.calculation == "multiple_selection":
            result = pd.DataFrame(index=data_frame.index)

            if not isinstance(self.response_options, dict):
                raise ValueError(
                    f"response_options must be a dict for 'multiple_selection' in scale {self.name}"
                )

            normalized = {
                str(int(float(k))): str(v).lower()
                for k, v in self.response_options.items()
            }

            true_keys = {
                k
                for k, v in normalized.items()
                if v in ["selected", "true", "yes", "1"]
            }
            false_keys = {
                k
                for k, v in normalized.items()
                if v in ["not selected", "false", "no", "0"]
            }

            if not true_keys or not false_keys:
                raise ValueError(
                    f"response_options for scale {self.name} must define at least one True and one False value"
                )

            for item in self.items:
                col_id = item["id"]

                col_label = to_snake_case(item.get("label", item["text"]))
                new_col_name = f"{self.name}_{col_label}"

                result[new_col_name] = data_frame[col_id].apply(
                    lambda x: (
                        True
                        if _normalize_value(x) in true_keys
                        else False if _normalize_value(x) in false_keys else pd.NA
                    )
                )

                result[new_col_name] = result[new_col_name].astype("boolean")

                open_ended_id = item.get("open_ended_id")
                if open_ended_id and open_ended_id in data_frame.columns:
                    result[new_col_name + "_other_text"] = data_frame[
                        open_ended_id
                    ].where(
                        data_frame[col_id].apply(
                            lambda x: _normalize_value(x) in true_keys
                        ),
                        None,
                    )

            return result

        elif self.calculation in ["mean", "sum"]:
            values = data_frame[item_ids].apply(
                lambda col: col.map(self._apply_missing)
            )
            if self.calculation == "mean":
                score = values.mean(axis=1)
            else:
                score = values.sum(axis=1)

        else:
            raise ValueError(
                f"Unknown calculation: {self.calculation} for scale {self.name}"
            )

        mask = self.get_subgroup_mask(data_frame)
        result = pd.Series(pd.NA, index=data_frame.index)
        result[mask] = score[mask]
        result_data_frame: pd.DataFrame = pd.DataFrame({self.output: result})

        if self.retain_single_items:
            for i, item_id in enumerate(item_ids, start=1):
                col_name = f"{self.name}-item_{i}"
                result_data_frame[col_name] = data_frame[item_id]

        return result_data_frame

    def _map_single_item(
        self, data_frame: pd.DataFrame, item_id: str, category_map: dict
    ) -> pd.DataFrame:
        """Map a single-item response using a category mapping.

        Args:
            data_frame (pd.DataFrame): Input DataFrame.
            item_id (str): Column name of the item.
            category_map (dict): Mapping from raw response codes to output values.

        Returns:
            pd.DataFrame: DataFrame with the mapped output column.
        """
        score = data_frame[item_id].apply(
            lambda x: (
                category_map.get(str(int(float(x))))
                if not pd.isnull(x)
                and str(int(float(x))) not in self.missing_response_option
                else pd.NA
            )
        )
        return pd.DataFrame({self.output: score})

    def _apply_missing(self, value: Any) -> Any:
        """Replace configured missing-response codes and unparsable values with pd.NA."""
        if pd.isnull(value):
            return pd.NA
        try:
            val = int(float(value))
        except Exception:
            return pd.NA
        if str(val) in self.missing_response_option:
            return pd.NA
        return val


def _normalize_value(x: Any) -> str | None:
    """Convert dataframe cell values to normalized string for comparison."""
    if pd.isna(x):
        return None
    try:
        # Handles floats like 1.0 -> "1"
        return str(int(float(x)))
    except Exception:
        return str(x).strip()
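`calculate_cronbachs_alpha` uses the standardized form of Cronbach's alpha, built from the average inter-item correlation rather than item variances. The core arithmetic on toy data (values are illustrative; three positively correlated items):

```python
import numpy as np
import pandas as pd

items = pd.DataFrame({
    "q1": [1, 2, 3, 4, 5, 1, 2, 3, 4, 5],
    "q2": [1, 2, 3, 4, 4, 2, 2, 3, 4, 5],
    "q3": [2, 2, 3, 3, 5, 1, 3, 3, 4, 4],
})

corr = items.corr()
k = corr.shape[0]
# Mean of the upper triangle (excluding the diagonal of 1.0s).
upper = corr.values[np.triu(np.ones_like(corr, dtype=bool), k=1)]
r_bar = float(np.mean(upper))
alpha = (k * r_bar) / (1 + (k - 1) * r_bar)
```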
0
src/utils/__init__.py
Normal file
152
src/utils/data_loader.py
Normal file
@ -0,0 +1,152 @@
@ -0,0 +1,152 @@
|
||||
# src/utils/data_loader.py
|
||||
|
||||
import logging
|
||||
import os
|
||||
|
||||
import yaml
|
||||
import pandas as pd
|
||||
|
||||
from typing import Any
|
||||
|
||||
|
||||
def load_yaml(path: str) -> dict[str, Any]:
|
||||
"""Load a YAML file and return its contents as a dictionary.
|
||||
|
||||
Args:
|
||||
path (str): Path to the YAML file.
|
||||
|
||||
Returns:
|
||||
dict[str, Any]: Parsed YAML content.
|
||||
"""
|
||||
with open(path, encoding="utf-8") as file:
|
||||
return yaml.safe_load(file)
|
||||
|
||||
|
||||
class DataLoader:
|
||||
"""Class for loading survey data and configuration files for multiple waves."""
|
||||
|
||||
def __init__(
|
||||
self, settings: dict[str, Any], waves_to_process: list[int] | None = None
|
||||
):
|
||||
"""Initialize the DataLoader.
|
||||
|
||||
Args:
|
||||
settings (dict[str, Any]): Project settings containing data and config paths.
|
||||
waves_to_process (list[int] | None): List of wave numbers to process. If None, all waves are processed.
|
||||
"""
|
||||
self.data_directory: str = settings["data_directory"]
|
||||
self.data_file_for_each_wave: dict[int, str] = settings[
|
||||
"data_file_for_each_wave"
|
||||
]
|
||||
self.config_file_for_each_wave: dict[int, str] = settings[
|
||||
"config_file_for_each_wave"
|
||||
]
|
||||
self.waves_to_process = waves_to_process or self.data_file_for_each_wave.keys()
|
||||
|
||||
def load_all_survey_data(self) -> dict[pd.DataFrame, str]:
|
||||
"""Load survey data and configuration paths for all specified waves.
|
||||
|
||||
Returns:
|
||||
dict: Dictionary mapping wave numbers to their data and config path.
|
||||
"""
|
||||
data_by_wave: dict = {}
|
||||
for wave_number in self.waves_to_process:
|
||||
config_of_wave: str = self.config_file_for_each_wave[wave_number]
|
||||
filename_data_of_wave: str = self.data_file_for_each_wave[wave_number]
|
||||
dataframe_wave = pd.read_csv(
|
||||
os.path.join(self.data_directory, filename_data_of_wave)
|
||||
)
|
||||
data_by_wave[wave_number] = {
|
||||
"data": dataframe_wave,
|
||||
"config_path": config_of_wave,
|
||||
}
|
||||
return data_by_wave
|
||||
|
||||
|
||||
def load_questionnaire_scales(
|
||||
path_questionnaire: str, questionnaire_name: str
|
||||
) -> dict[str, Any]:
|
||||
"""Load scales from a questionnaire YAML file.
|
||||
|
||||
Args:
|
||||
path_questionnaire (str): Path to the questionnaire YAML file.
|
||||
questionnaire_name (str): Name of the questionnaire.
|
||||
|
||||
Returns:
|
||||
dict[str, Any]: Dictionary containing questionnaire scale information.
|
||||
"""
|
||||
questionnaire_config: dict[str, Any] = load_yaml(path_questionnaire)
|
||||
return {
|
||||
scale["name"]: {**scale, "questionnaire": questionnaire_name}
|
||||
for scale in questionnaire_config["scales"]
|
||||
}
|
||||
|
||||
|
||||
def assemble_wave_info(
|
||||
wave_config_path: str, settings: dict[str, Any]
|
||||
) -> tuple[dict[str, dict], dict[str, str], set, dict[str, Any]]:
|
||||
"""Assemble scale, subgroup, and composite information for a wave.
|
||||
|
||||
Args:
|
||||
wave_config_path (str): Path to the wave configuration YAML file.
|
||||
settings (dict[str, Any]): Project settings.
|
||||
|
||||
Returns:
|
||||
tuple: (scale_dictionary, final_subgroup_scales, composite_scales)
|
||||
scale_dictionary (dict): Dictionary containing questionnaire scale information.
|
||||
final_subgroup_scales (dict): Mapping of scale names to subgroup names.
|
||||
composite_scales (dict): Composite scale definitions.
|
||||
"""
|
||||
config_wave: dict = load_yaml(wave_config_path)
|
||||
scale_dictionary: dict = {}
|
||||
scales_by_questionnaire: dict = {}
|
||||
questionnaire_directory: str = settings["questionnaire_directory"]
|
||||
skip_scales = set(config_wave.get("skip_scales", []))
|
||||
|
||||
for questionnaire_info in config_wave["questionnaires"]:
|
||||
questionnaire_path: str = questionnaire_info["path"]
|
||||
|
||||
if not os.path.isabs(questionnaire_path):
|
||||
questionnaire_path = os.path.normpath(
|
||||
os.path.join(questionnaire_directory, questionnaire_path)
|
||||
)
|
||||
questionnaire_scales: dict[str, Any] = load_questionnaire_scales(
|
||||
questionnaire_path, questionnaire_info["name"]
|
||||
)
|
||||
scale_dictionary.update(questionnaire_scales)
|
||||
scales_by_questionnaire[questionnaire_info["name"]] = list(
|
||||
questionnaire_scales.keys()
|
||||
)
|
||||
|
||||
subgroup_scales_input: dict = config_wave.get("subgroup_scales", {})
|
||||
composite_scales: dict = config_wave.get("composite_scales", {})
|
||||
final_subgroup_scales: dict = {}
|
||||
|
||||
for questionnaire_name, scale_names in scales_by_questionnaire.items():
|
||||
if questionnaire_name in subgroup_scales_input:
|
||||
subgroup: str = subgroup_scales_input[questionnaire_name]
|
||||
for scale in scale_names:
|
||||
final_subgroup_scales[scale] = subgroup
|
||||
|
||||
for entry_name, subgroup in subgroup_scales_input.items():
|
||||
if entry_name in scale_dictionary:
|
||||
final_subgroup_scales[entry_name] = subgroup
|
||||
|
||||
for scale in scale_dictionary:
|
||||
if scale not in final_subgroup_scales:
|
||||
questionnaire_name = scale_dictionary[scale].get("questionnaire", "unknown")
|
||||
logging.info(
|
||||
f"Scale '{scale}' (from questionnaire '{questionnaire_name}') "
|
||||
f"has no subgroup specified; assigning to subgroup 'all'."
|
||||
)
|
||||
final_subgroup_scales[scale] = "all"
|
||||
|
||||
for entry_name in subgroup_scales_input:
|
||||
if (
|
||||
entry_name not in scale_dictionary
|
||||
and entry_name not in scales_by_questionnaire
|
||||
):
|
||||
raise ValueError(
|
||||
f"Entry '{entry_name}' in subgroup_scales is not a loaded scale or questionnaire name."
|
||||
)
|
||||
return scale_dictionary, final_subgroup_scales, skip_scales, composite_scales
|
||||
450
src/utils/database_documentation_generator.py
Normal file
@ -0,0 +1,450 @@
import os
from logging import Logger
from pathlib import Path
from typing import Any

import pypandoc

from src.utils.data_loader import load_yaml
from src.utils.utlis import to_snake_case


def generate_pdf_from_markdown(markdown_path: str, pdf_path: str) -> str:
    """Convert a Markdown file to PDF using pypandoc.

    Args:
        markdown_path (str): Path to the input Markdown file.
        pdf_path (str): Path where the output PDF should be saved.

    Returns:
        str: Output path of the generated PDF.
    """
    output: str = pypandoc.convert_file(markdown_path, "pdf", outputfile=pdf_path)
    return output


def infer_data_type(scale: dict[str, Any], is_composite: bool = False) -> str:
    """Infer the database data type for a scale or composite scale.

    Args:
        scale (dict[str, Any]): Scale or composite scale specification.
        is_composite (bool, optional): Whether the scale is a composite. Defaults to False.

    Returns:
        str: Inferred data type as string (e.g., 'FLOAT', 'INTEGER', 'TEXT').
    """
    if is_composite:
        method = scale.get("method", "")
        if method in ("mean", "mean_correct"):
            return "FLOAT"
        elif method in ("sum", "sum_correct"):
            return "INTEGER"
        else:
            return "TEXT"
    calculation = scale.get("calculation", "")
    if calculation in ("mean", "response", "mapped_mean"):
        return "FLOAT"
    if calculation in ("sum", "sum_correct"):
        return "INTEGER"
    if calculation in ("categorical", "ordinal"):
        return "TEXT"
    if calculation in ("boolean", "multiple_selection"):
        return "BOOLEAN"
    return "TEXT"
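As a standalone sanity check, the type mapping above can be exercised without the rest of the module. This is a minimal sketch: the mapping is reproduced from `infer_data_type`, with the buggy `calculation in "multiple_selection"` substring test replaced by an explicit membership check (both `boolean` and `multiple_selection` yield `BOOLEAN`, so behavior is unchanged for valid inputs):

```python
from typing import Any


def infer_data_type(scale: dict[str, Any], is_composite: bool = False) -> str:
    # Composite scales are typed by their aggregation method.
    if is_composite:
        method = scale.get("method", "")
        if method in ("mean", "mean_correct"):
            return "FLOAT"
        if method in ("sum", "sum_correct"):
            return "INTEGER"
        return "TEXT"
    # Regular scales are typed by their calculation rule.
    calculation = scale.get("calculation", "")
    if calculation in ("mean", "response", "mapped_mean"):
        return "FLOAT"
    if calculation in ("sum", "sum_correct"):
        return "INTEGER"
    if calculation in ("categorical", "ordinal"):
        return "TEXT"
    if calculation in ("boolean", "multiple_selection"):
        return "BOOLEAN"
    return "TEXT"


assert infer_data_type({"calculation": "mean"}) == "FLOAT"
assert infer_data_type({"method": "sum"}, is_composite=True) == "INTEGER"
```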


def get_aggregate_range(scale: dict[str, Any], is_composite: bool = False) -> str:
    """Calculate the aggregated score range or category count for a scale.

    Args:
        scale (dict[str, Any]): Scale or composite scale specification.
        is_composite (bool, optional): Whether the scale is a composite. Defaults to False.

    Returns:
        str: Aggregated score range or category description.
    """
    calculation: str = scale.get("calculation", "")
    items: list = scale.get("items", [])
    item_count: int = len(items)
    score_range: list | None = scale.get("score_range")
    if is_composite:
        method: str = scale.get("method", "")
        if method in ("mean", "mean_correct", "sum", "sum_correct"):
            return "See constituent scales"
        return ""
    if calculation == "mean" and score_range and item_count:
        return f"{score_range[0]}–{score_range[1]}"
    if calculation == "sum" and score_range and item_count:
        minimum_total_score: int = item_count * score_range[0]
        maximum_total_score: int = item_count * score_range[1]
        return f"{minimum_total_score}–{maximum_total_score}"
    if calculation == "sum_correct":
        return f"0–{item_count}"
    if calculation == "mapped_mean" and items:
        if "response_options" in scale and isinstance(scale["response_options"], dict):
            mapped_values = list(scale["response_options"].values())
            min_score = min(mapped_values)
            max_score = max(mapped_values)
            if item_count > 1:
                return f"{min_score}–{max_score}"
    if calculation == "response" and score_range:
        return f"{score_range[0]}–{score_range[1]}"
    if calculation in ("categorical", "ordinal"):
        if isinstance(scale.get("response_options"), dict):
            missing_response_option = set(scale.get("missing_response_option", []))
            valid_options = {
                k: v
                for k, v in scale["response_options"].items()
                if k not in missing_response_option
            }
            number_of_categories = len(valid_options)
            return f"{number_of_categories} categories"
    if calculation == "boolean":
        return "Boolean (true / false)"
    if calculation == "multiple_selection":
        return "Boolean (selected / not selected)"
    return ""
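For a sum scale, the aggregate range above is just the per-item score range scaled by the item count; a minimal worked example of that arithmetic (the item count and range are hypothetical):

```python
# 10 items, each scored 1-5, aggregated by summing:
item_count = 10
score_range = [1, 5]
minimum_total_score = item_count * score_range[0]
maximum_total_score = item_count * score_range[1]
aggregate = f"{minimum_total_score}–{maximum_total_score}"
print(aggregate)  # prints 10–50
```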


def get_item_details(scale: dict) -> tuple[list[str], str | None]:
    """Extract item texts and summarize score ranges for a scale.

    Args:
        scale (dict): Scale specification.

    Returns:
        tuple: (List of item descriptions, summarized score range or None)
    """
    calculation: str = scale.get("calculation", "")
    items: list = scale.get("items", [])

    if calculation == "multiple_selection":
        lines: list = []
        for item in items:
            label = item.get("label") or item.get("text", "")
            col_name = f"{scale['name']}_{to_snake_case(label)}"
            lines.append(f"**{label}** → column `{col_name}` (boolean)")
            if "open_ended_id" in item:
                lines.append(
                    f" _(Open-ended responses captured in: {item['open_ended_id']})_"
                )
        return lines, None

    if calculation in ("boolean", "mapped_mean") and "response_options" in scale:
        response_options = scale["response_options"]
        missing_response_option = set(scale.get("missing_response_option", []))
        lines = []

        for item in items:
            item_text = item.get("text", "")
            lines.append(f"**{item_text}**")

            for code, label in response_options.items():
                if code not in missing_response_option:
                    lines.append(f" {code}: {label}")

            if "open_ended_id" in item:
                lines.append(
                    f" _(Open-ended responses captured in: {item['open_ended_id']})_"
                )

        return lines, None

    if calculation in ("categorical", "ordinal") and "response_options" in scale:
        response_options = scale["response_options"]
        missing_response_option = set(scale.get("missing_response_option", []))
        lines = []

        for item in items:
            item_text = item.get("text", "")
            lines.append(f"**{item_text}**")

            for code, label in response_options.items():
                if code in missing_response_option:
                    lines.append(f" {code}: {label} _(treated as NA)_")
                else:
                    lines.append(f" {code}: {label}")

            if "open_ended_id" in item:
                lines.append(
                    f" _(Open-ended responses captured in: {item['open_ended_id']})_"
                )

        return lines, None

    all_ranges: list = [
        item.get("score_range", scale.get("score_range")) for item in items
    ]
    if all_ranges and all(score_range == all_ranges[0] for score_range in all_ranges):
        score_range_summary = all_ranges[0]
    else:
        score_range_summary = None

    lines = []
    for item in items:
        line = item.get("text", "")
        notes: list = []
        if calculation == "sum_correct" and "correct" in item:
            value = item["correct"]
            notes.append("correct" if value in [1, "1", True] else "incorrect")
        if item.get("inverse", False):
            notes.append("inversed")
        if notes:
            line += " (" + ", ".join(notes) + ")"
        lines.append(line)

    return lines, score_range_summary


def get_item_score_ranges(scale: dict) -> list[str]:
    """Get score ranges for each item in a scale.

    Args:
        scale (dict): Scale specification.

    Returns:
        list[str]: List of item IDs and their score ranges.
    """
    ranges: list = []
    for item in scale.get("items", []):
        score_range = item.get("score_range")
        if score_range:
            ranges.append(f"{item['id']}: {score_range[0]}–{score_range[1]}")
        else:
            ranges.append(f"{item['id']}: N/A")
    return ranges


def render_table(table: list, headers: list) -> str:
    """Render a Markdown table from a list of dictionaries.

    Args:
        table (list): List of row dictionaries.
        headers (list): List of column headers.

    Returns:
        str: Markdown-formatted table as string.
    """
    output: list[str] = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join(["---"] * len(headers)) + " |",
    ]
    for row in table:
        output.append(
            "| " + " | ".join(str(row.get(header, "")) for header in headers) + " |"
        )
    return "\n".join(output)
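`render_table` can be exercised on its own; a minimal sketch with the function reproduced verbatim from above and two hypothetical rows:

```python
def render_table(table: list, headers: list) -> str:
    # Header row, separator row, then one row per dictionary;
    # missing keys render as empty cells.
    output: list[str] = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join(["---"] * len(headers)) + " |",
    ]
    for row in table:
        output.append(
            "| " + " | ".join(str(row.get(header, "")) for header in headers) + " |"
        )
    return "\n".join(output)


rows = [
    {"Column Name": "age", "Data Type": "INTEGER"},
    {"Column Name": "wellbeing", "Data Type": "FLOAT"},
]
print(render_table(rows, ["Column Name", "Data Type"]))
```

This renders a pipe table whose third line is `| age | INTEGER |`.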


def generate_db_api_reference(
    settings: dict[str, Any],
    logger: Logger,
    cronbachs_alphas: dict | None = None,
    output_path: str = "results/database_api_reference.md",
) -> None:
    """Generate a Markdown (and optional PDF) documentation of the database API.

    Args:
        settings (dict[str, Any]): Project settings containing paths and options.
        logger (Logger): Logger object for status messages.
        cronbachs_alphas (dict[str, float] | None): Dictionary of Cronbach's alpha values by scale name.
        output_path (str, optional): Path for the Markdown output file. Defaults to 'results/database_api_reference.md'.

    Returns:
        None
    """
    questionnaire_directory: str = settings["questionnaire_directory"]
    questionnaire_file_for_each_wave: dict[int, str] = settings[
        "config_file_for_each_wave"
    ]

    alpha_dict = cronbachs_alphas or {}

    questionnaire_catalog: dict = {}
    for filename in os.listdir(questionnaire_directory):
        if filename.endswith(".yaml"):
            file_path = os.path.join(questionnaire_directory, filename)
            questionnaire = load_yaml(file_path)
            questionnaire_catalog[questionnaire["questionnaire"]] = {
                "file": file_path,
                "cfg": questionnaire,
            }

    variables_data: dict[str, dict] = {}
    item_details: dict = {}
    ordinal_categorical_mappings: dict = {}

    for wave_number, config_path_of_wave in questionnaire_file_for_each_wave.items():
        wave_config: dict = load_yaml(config_path_of_wave)
        subgroup_scales: dict = wave_config.get("subgroup_scales", {})

        for questionnaire_info in wave_config["questionnaires"]:
            questionnaire_file: str = questionnaire_info["path"]
            if not os.path.isabs(questionnaire_file):
                questionnaire_file = os.path.normpath(
                    os.path.join(questionnaire_directory, questionnaire_file)
                )
            questionnaire_config: dict = load_yaml(questionnaire_file)
            for scale in questionnaire_config["scales"]:
                output: str = scale.get("output", scale["name"])
                reference: str = scale.get("reference", "")
                data_type: str = infer_data_type(scale)
                range_aggregated_scale: str = get_aggregate_range(scale)

                applied_to_group: str = subgroup_scales.get(output, "all")

                alpha_per_wave = alpha_dict.get(output, {})
                if isinstance(alpha_per_wave, dict) and alpha_per_wave:
                    alpha_str = ", ".join(
                        f"wave{wave}: {val:.4f}"
                        for wave, val in sorted(alpha_per_wave.items())
                    )
                else:
                    alpha_str = "Not applicable"

                if output not in variables_data:
                    variables_data[output] = {
                        "Column Name": output,
                        "Data Type": data_type,
                        "Waves": [],
                        "Questionnaire": questionnaire_config.get(
                            "questionnaire", os.path.basename(questionnaire_file)
                        ),
                        "File": os.path.basename(questionnaire_file),
                        "Scale Name": scale["name"],
                        "Calculation": scale.get("calculation", ""),
                        "Score Range/Categories": range_aggregated_scale,
                        "Description": scale.get("label", ""),
                        "Cronbach's $\\alpha$": alpha_str,
                        "Applied To Group": applied_to_group,
                        "Reference": reference,
                        "Retain Single Items": (
                            "Yes" if scale.get("retain_single_items", False) else "No"
                        ),
                    }
                    item_lines, score_range_summary = get_item_details(scale)
                    item_details[output] = (item_lines, score_range_summary)
                    if (
                        scale.get("calculation") in ("ordinal", "categorical")
                        and "response_options" in scale
                    ):
                        ordinal_categorical_mappings[output] = scale["response_options"]

                wave_label = f"wave{wave_number}"
                if wave_label not in variables_data[output]["Waves"]:
                    variables_data[output]["Waves"].append(wave_label)

        for composite_scale, composite_specifications in wave_config.get(
            "composite_scales", {}
        ).items():
            alpha_per_wave = alpha_dict.get(composite_scale, {})
            if isinstance(alpha_per_wave, dict) and alpha_per_wave:
                formatted = []
                for wave, val in sorted(alpha_per_wave.items()):
                    formatted.append(
                        f"wave{wave}: " + ", ".join(f"{v:.4f}" for v in val)
                    )
                alpha_str = "; ".join(formatted)
            else:
                alpha_str = "Not applicable"

            data_type = infer_data_type(composite_specifications, is_composite=True)
            range_aggregated_scale = get_aggregate_range(
                composite_specifications, is_composite=True
            )

            applied_to_group = composite_specifications.get("subgroup", "")

            if composite_scale not in variables_data:
                variables_data[composite_scale] = {
                    "Column Name": composite_scale,
                    "Data Type": data_type,
                    "Waves": [],
                    "Questionnaire": "composite",
                    "File": "",
                    "Scale Name": composite_scale,
                    "Calculation": composite_specifications.get("method", ""),
                    "Score Range/Categories": range_aggregated_scale,
                    "Cronbach's $\\alpha$": alpha_str,
                    "Description": f"Composite of {', '.join(composite_specifications['scales'])}",
                    "Applied To Group": applied_to_group,
                    "Reference": "",
                    "Retain Single Items": "See constituent scales",
                }
                item_details[composite_scale] = (
                    [
                        f"This is a composite scale. See constituent scales: "
                        f"{', '.join(composite_specifications['scales'])}"
                    ],
                    None,
                )

            wave_label = f"wave{wave_number}"
            if wave_label not in variables_data[composite_scale]["Waves"]:
                variables_data[composite_scale]["Waves"].append(wave_label)

    rows: list = []
    for var_data in variables_data.values():
        row_data = var_data.copy()
        row_data["Wave"] = ", ".join(sorted(row_data["Waves"]))
        del row_data["Waves"]
        rows.append(row_data)

    Path(output_path).parent.mkdir(parents=True, exist_ok=True)

    with open(output_path, "w", encoding="utf-8") as out:
        out.write("# Database API Reference\n")
        out.write(
            "\nThis document was autogenerated from source YAMLs for full provenance and data transparency.\n"
            "Please note that if scales were coalesced into one scale during processing (named _overall), the subscales are removed and their info remains only for documentation purposes.\n"
        )

        out.write("## Data Columns and Scales by Wave\n\n")
        headers: list[str] = [
            "Column Name",
            "Data Type",
            "Wave",
            "Questionnaire",
            "File",
            "Scale Name",
            "Cronbach's $\\alpha$",
            "Calculation",
            "Score Range/Categories",
            "Description",
            "Applied To Group",
            "Reference",
            "Retain Single Items",
        ]
        out.write(render_table(rows, headers) + "\n\n")

        out.write("## Item Details\n\n")
        for column, items in item_details.items():
            out.write(f"### {column}\n")
            for line in items[0]:
                out.write(f"- {line}\n")
            item_score_range = items[1]
            if item_score_range and all(x is not None for x in item_score_range):
                out.write(
                    f"\nItem score range: {item_score_range[0]}–{item_score_range[1]}\n"
                )
            out.write("\n")

        out.write(
            "\n---\n_Auto-generated documentation. Edit or supplement with study usage notes as needed._\n"
        )

    logger.info(f"Database API reference written to {output_path}")

    pdf_out: str | None = settings.get("api_reference_pdf") or None
    if pdf_out:
        try:
            Path(pdf_out).parent.mkdir(parents=True, exist_ok=True)
            generate_pdf_from_markdown(output_path, pdf_out)
            logger.info(f"Database API reference PDF written to {pdf_out}")
        except Exception as e:
            logger.warning(f"Could not create PDF API reference: {e}")
64
src/utils/database_populator.py
Normal file
@ -0,0 +1,64 @@
# src/utils/database_populator.py

import sqlite3
from pathlib import Path
from sqlite3 import Connection

import pandas as pd


def populate_database(
    preprocessed_data_all_waves: dict[int, pd.DataFrame],
    database_path: str = "results/study_results.sqlite",
    export_csv: bool = False,
    export_excel: bool = False,
    csv_output_directory: str = "results",
    excel_output_directory: str = "results",
) -> None:
    """Populate an SQLite database with preprocessed data for all waves, one table per wave.

    Optionally export each wave as a separate CSV and/or Excel file.

    Args:
        preprocessed_data_all_waves (dict[int, pd.DataFrame]): Dictionary mapping wave numbers to their corresponding DataFrames.
        database_path (str, optional): Path to the SQLite database file. Defaults to "results/study_results.sqlite".
        export_csv (bool, optional): Whether to export CSV files for each wave. Defaults to False.
        export_excel (bool, optional): Whether to export Excel files for each wave. Defaults to False.
        csv_output_directory (str, optional): Directory for CSV output files. Defaults to "results".
        excel_output_directory (str, optional): Directory for Excel output files. Defaults to "results".

    Returns:
        None
    """
    Path(database_path).parent.mkdir(parents=True, exist_ok=True)

    connection: Connection = sqlite3.connect(database_path)
    try:
        for wave, dataframe in preprocessed_data_all_waves.items():
            table_name = f"wave{wave}"
            dataframe.to_sql(table_name, connection, if_exists="replace", index=False)
    finally:
        connection.close()

    csv_directory: Path = Path(csv_output_directory)
    excel_directory: Path = Path(excel_output_directory)

    if export_csv:
        csv_directory.mkdir(parents=True, exist_ok=True)
    if export_excel:
        excel_directory.mkdir(parents=True, exist_ok=True)

    for wave, dataframe in preprocessed_data_all_waves.items():
        data_for_export = dataframe.copy()

        bool_columns = data_for_export.select_dtypes(include="bool").columns
        data_for_export[bool_columns] = data_for_export[bool_columns].astype("Int8")

        if export_csv:
            csv_filename = csv_directory / f"HMC_wave{wave}_preprocessed.csv"
            data_for_export.to_csv(csv_filename, index=False, na_rep="NA")

        if export_excel:
            excel_filename = excel_directory / f"HMC_wave{wave}_preprocessed.xlsx"
            data_for_export.to_excel(
                excel_filename, index=False, na_rep="NA", engine="openpyxl"
            )
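The bool-to-Int8 cast above makes boolean columns export as 0/1 rather than True/False while staying NA-safe; a minimal sketch of just that step (assumes pandas is installed; the column names are hypothetical):

```python
import pandas as pd

data_for_export = pd.DataFrame({"consent": [True, False, True], "age": [31, 42, 27]})

# Only boolean columns are converted; other dtypes are left untouched.
bool_columns = data_for_export.select_dtypes(include="bool").columns
data_for_export[bool_columns] = data_for_export[bool_columns].astype("Int8")

print(data_for_export["consent"].tolist())  # [1, 0, 1]
```

`"Int8"` (capitalized) is pandas' nullable integer dtype, so any missing booleans survive the cast as `<NA>` instead of raising.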
54
src/utils/logging_config.py
Normal file
@ -0,0 +1,54 @@
# src/utils/logging_config.py

import logging
import logging.config
import os
from typing import Any
from datetime import datetime


def setup_logging(default_level: int = logging.INFO) -> None:
    """Set up the logging configuration for the application.

    This function configures both console and file logging with a standard format.
    The log file is named 'processing_<timestamp>.log' and uses UTF-8 encoding.

    Args:
        default_level (int, optional): The default logging level (e.g., logging.INFO). Defaults to logging.INFO.

    Returns:
        None
    """
    log_directory: str = "logs"
    if not os.path.exists(log_directory):
        os.makedirs(log_directory)
    timestamp: str = datetime.now().strftime("%Y%m%d_%H%M%S")
    log_path: str = os.path.join(log_directory, f"processing_{timestamp}.log")

    logging_config: dict[str, Any] = {
        "version": 1,
        "disable_existing_loggers": False,
        "formatters": {
            "standard": {"format": "%(asctime)s [%(levelname)s] %(name)s: %(message)s"},
        },
        "handlers": {
            "console": {
                "level": default_level,
                "class": "logging.StreamHandler",
                "formatter": "standard",
            },
            "file": {
                "level": default_level,
                "class": "logging.FileHandler",
                "formatter": "standard",
                "filename": log_path,
                "encoding": "utf8",
            },
        },
        "root": {
            "handlers": ["console", "file"],
            "level": default_level,
        },
    }

    logging.config.dictConfig(logging_config)
16
src/utils/settings_loader.py
Normal file
@ -0,0 +1,16 @@
from typing import Any

import yaml


def load_settings(settings_path: str = "settings.yaml") -> dict[str, Any]:
    """Load project settings from a YAML file.

    Args:
        settings_path (str, optional): Path to the settings YAML file. Defaults to "settings.yaml".

    Returns:
        dict: Dictionary containing the loaded settings.
    """
    with open(settings_path, encoding="utf-8") as file:
        return yaml.safe_load(file)
17
src/utils/utlis.py
Normal file
@ -0,0 +1,17 @@
import re


def to_snake_case(text: str) -> str:
    """Convert a given text to snake_case format.

    Args:
        text (str): The input text to be converted.

    Returns:
        str: The converted text in snake_case format.
    """
    text = text.strip().lower()
    text = re.sub(r"[^\w\s]", "", text)
    text = re.sub(r"\s+", "_", text)
    return text
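`to_snake_case` drives the column naming for multiple_selection items in the documentation generator; a minimal sketch with the function reproduced verbatim from above (the input strings are hypothetical labels):

```python
import re


def to_snake_case(text: str) -> str:
    text = text.strip().lower()
    text = re.sub(r"[^\w\s]", "", text)  # drop punctuation (hyphens, parentheses, ...)
    text = re.sub(r"\s+", "_", text)     # collapse whitespace runs to single underscores
    return text


print(to_snake_case("  Very Satisfied "))   # very_satisfied
print(to_snake_case("Open-ended (other)"))  # openended_other
```

Note that punctuation is dropped before whitespace is collapsed, so "Open-ended" becomes "openended", not "open_ended".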
0
tests/__init__.py
Normal file
382
tests/test_data_loader.py
Normal file
@ -0,0 +1,382 @@
import pytest
import pandas as pd
from unittest.mock import patch, mock_open
from src.utils.data_loader import (
    load_yaml,
    DataLoader,
    load_questionnaire_scales,
    assemble_wave_info,
)


class TestLoadYaml:
    def test_yaml_file_loads_correctly(self):
        yaml_content = "key1: value1\nkey2:\n nested: value2"
        with patch("builtins.open", mock_open(read_data=yaml_content)):
            result = load_yaml("test.yaml")
            assert result == {"key1": "value1", "key2": {"nested": "value2"}}

    def test_yaml_file_not_found_raises_exception(self):
        with pytest.raises(FileNotFoundError):
            load_yaml("nonexistent.yaml")

    def test_yaml_file_with_empty_content(self):
        yaml_content = ""
        with patch("builtins.open", mock_open(read_data=yaml_content)):
            result = load_yaml("test.yaml")
            assert result is None

    def test_yaml_file_with_invalid_syntax_raises_exception(self):
        yaml_content = "invalid: yaml: content: ["
        with patch("builtins.open", mock_open(read_data=yaml_content)):
            with pytest.raises(Exception):
                load_yaml("test.yaml")

class TestDataLoader:
    def test_dataloader_initializes_with_all_waves_when_none_specified(self):
        settings = {
            "data_directory": "/data",
            "data_file_for_each_wave": {1: "wave1.csv", 2: "wave2.csv"},
            "config_file_for_each_wave": {1: "config1.yaml", 2: "config2.yaml"},
        }
        loader = DataLoader(settings)
        assert loader.waves_to_process == settings["data_file_for_each_wave"].keys()

    def test_dataloader_initializes_with_specified_waves(self):
        settings = {
            "data_directory": "/data",
            "data_file_for_each_wave": {1: "wave1.csv", 2: "wave2.csv", 3: "wave3.csv"},
            "config_file_for_each_wave": {
                1: "config1.yaml",
                2: "config2.yaml",
                3: "config3.yaml",
            },
        }
        loader = DataLoader(settings, [1, 3])
        assert loader.waves_to_process == [1, 3]

    def test_dataloader_stores_settings_correctly(self):
        settings = {
            "data_directory": "/test/data",
            "data_file_for_each_wave": {1: "test.csv"},
            "config_file_for_each_wave": {1: "test.yaml"},
        }
        loader = DataLoader(settings)
        assert loader.data_directory == "/test/data"
        assert loader.data_file_for_each_wave == {1: "test.csv"}
        assert loader.config_file_for_each_wave == {1: "test.yaml"}

    @patch("pandas.read_csv")
    def test_dataloader_loads_survey_data_for_specified_waves(self, mock_read_csv):
        import os

        mock_df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})
        mock_read_csv.return_value = mock_df

        settings = {
            "data_directory": "/data",
            "data_file_for_each_wave": {1: "wave1.csv", 2: "wave2.csv"},
            "config_file_for_each_wave": {1: "config1.yaml", 2: "config2.yaml"},
        }
        loader = DataLoader(settings, [1])
        result = loader.load_all_survey_data()

        assert 1 in result
        assert "data" in result[1]
        assert "config_path" in result[1]
        assert result[1]["config_path"] == "config1.yaml"

        expected_path = os.path.join("/data", "wave1.csv")
        mock_read_csv.assert_called_once_with(expected_path)

    @patch("pandas.read_csv")
    def test_dataloader_loads_multiple_waves(self, mock_read_csv):
        mock_df1 = pd.DataFrame({"wave1_col": [1, 2]})
        mock_df2 = pd.DataFrame({"wave2_col": [3, 4]})
        mock_read_csv.side_effect = [mock_df1, mock_df2]

        settings = {
            "data_directory": "/data",
            "data_file_for_each_wave": {1: "wave1.csv", 2: "wave2.csv"},
            "config_file_for_each_wave": {1: "config1.yaml", 2: "config2.yaml"},
        }
        loader = DataLoader(settings, [1, 2])
        result = loader.load_all_survey_data()

        assert len(result) == 2
        assert 1 in result and 2 in result
        assert result[1]["config_path"] == "config1.yaml"
        assert result[2]["config_path"] == "config2.yaml"

    @patch("pandas.read_csv")
    def test_dataloader_handles_csv_read_error(self, mock_read_csv):
        mock_read_csv.side_effect = FileNotFoundError("CSV file not found")

        settings = {
            "data_directory": "/data",
            "data_file_for_each_wave": {1: "nonexistent.csv"},
            "config_file_for_each_wave": {1: "config1.yaml"},
        }
        loader = DataLoader(settings, [1])

        with pytest.raises(FileNotFoundError):
            loader.load_all_survey_data()

class TestLoadQuestionnaireScales:
    def test_questionnaire_scales_loads_from_valid_yaml(self):
        yaml_content = """
        scales:
          - name: scale1
            items: [item1, item2]
          - name: scale2
            items: [item3, item4]
        """
        with patch("builtins.open", mock_open(read_data=yaml_content)):
            with patch("yaml.safe_load") as mock_yaml:
                mock_yaml.return_value = {
                    "scales": [
                        {"name": "scale1", "items": ["item1", "item2"]},
                        {"name": "scale2", "items": ["item3", "item4"]},
                    ]
                }
                result = load_questionnaire_scales("test.yaml", questionnaire_name="q1")
                assert "scale1" in result
                assert "scale2" in result
                assert result["scale1"]["items"] == ["item1", "item2"]

    def test_questionnaire_scales_handles_empty_scales_list(self):
        yaml_content = """
        scales: []
        """
        with patch("builtins.open", mock_open(read_data=yaml_content)):
            with patch("yaml.safe_load") as mock_yaml:
                mock_yaml.return_value = {"scales": []}
                result = load_questionnaire_scales("test.yaml", questionnaire_name="q1")
                assert result == {}

    def test_questionnaire_scales_loads_complex_structure(self):
        with patch("builtins.open", mock_open()):
            with patch("yaml.safe_load") as mock_yaml:
                mock_yaml.return_value = {
                    "questionnaire": "test_questionnaire",
                    "scales": [
                        {
                            "name": "choice_favorite_ai_user",
                            "label": "Choice of favorite AI system",
                            "calculation": "categorical",
                            "response_options": {"1": "ChatGPT", "2": "Claude"},
                            "output": "choice_favorite_ai_user",
                        }
                    ],
                }
                result = load_questionnaire_scales("test.yaml", questionnaire_name="q1")
                assert "choice_favorite_ai_user" in result
                assert result["choice_favorite_ai_user"]["calculation"] == "categorical"

    def test_questionnaire_scales_handles_missing_scales_key(self):
        with patch("builtins.open", mock_open()):
            with patch("yaml.safe_load") as mock_yaml:
                mock_yaml.return_value = {"questionnaire": "test"}
                with pytest.raises(KeyError):
                    load_questionnaire_scales("test.yaml", questionnaire_name="q1")

class TestAssembleWaveInfo:
    @patch("src.utils.data_loader.load_yaml")
    @patch("src.utils.data_loader.load_questionnaire_scales")
    @patch("os.path.isabs")
    @patch("os.path.normpath")
    @patch("os.path.join")
    def test_wave_info_assembles_with_absolute_questionnaire_paths(
        self, mock_join, mock_normpath, mock_isabs, mock_load_scales, mock_load_yaml
    ):
        mock_isabs.return_value = True
        mock_load_yaml.return_value = {
            "questionnaires": [{"name": "q1", "path": "/absolute/path/q1.yaml"}]
        }
        mock_load_scales.return_value = {
            "scale1": {"items": ["item1", "item2"], "questionnaire": "q1"},
            "scale2": {"items": ["item3", "item4"], "questionnaire": "q1"},
        }

        settings = {"questionnaire_directory": "/base"}
        result = assemble_wave_info("wave_config.yaml", settings)

        mock_load_scales.assert_called_once_with("/absolute/path/q1.yaml", "q1")
        assert "scale1" in result[0]

    @patch("src.utils.data_loader.load_yaml")
    @patch("src.utils.data_loader.load_questionnaire_scales")
    @patch("os.path.isabs")
    @patch("os.path.normpath")
    @patch("os.path.join")
    def test_wave_info_assembles_with_relative_questionnaire_paths(
        self, mock_join, mock_normpath, mock_isabs, mock_load_scales, mock_load_yaml
    ):
        mock_isabs.return_value = False
        mock_join.return_value = "/base/relative/q1.yaml"
        mock_normpath.return_value = "/base/relative/q1.yaml"
        mock_load_yaml.return_value = {
            "questionnaires": [{"name": "q1", "path": "relative/q1.yaml"}]
        }
        mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}

        settings = {"questionnaire_directory": "/base"}
        result = assemble_wave_info("wave_config.yaml", settings)

        assert "scale1" in result[0]

        mock_join.assert_called_once_with("/base", "relative/q1.yaml")
        mock_load_scales.assert_called_once_with("/base/relative/q1.yaml", "q1")

    @patch("src.utils.data_loader.load_yaml")
    @patch("src.utils.data_loader.load_questionnaire_scales")
    @patch("logging.info")
    def test_wave_info_assigns_all_subgroup_to_scales_without_subgroup(
        self, mock_log, mock_load_scales, mock_load_yaml
    ):
        mock_load_yaml.return_value = {
            "questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}]
        }
        mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}

        settings = {"questionnaire_directory": "/base"}
        result = assemble_wave_info("wave_config.yaml", settings)

        assert result[1]["scale1"] == "all"
        mock_log.assert_called_once()

    @patch("src.utils.data_loader.load_yaml")
    @patch("src.utils.data_loader.load_questionnaire_scales")
    def test_wave_info_handles_subgroup_scales_by_questionnaire_name(
        self, mock_load_scales, mock_load_yaml
    ):
        mock_load_yaml.return_value = {
            "questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}],
            "subgroup_scales": {"q1": "group1"},
        }
        mock_load_scales.return_value = {
            "scale1": {"questionnaire": "q1"},
            "scale2": {"questionnaire": "q1"},
        }

        settings = {"questionnaire_directory": "/base"}
        result = assemble_wave_info("wave_config.yaml", settings)

        assert result[1]["scale1"] == "group1"
        assert result[1]["scale2"] == "group1"

    @patch("src.utils.data_loader.load_yaml")
    @patch("src.utils.data_loader.load_questionnaire_scales")
    def test_wave_info_handles_subgroup_scales_by_scale_name(
        self, mock_load_scales, mock_load_yaml
    ):
        mock_load_yaml.return_value = {
            "questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}],
            "subgroup_scales": {"scale1": "specific_group"},
        }
        mock_load_scales.return_value = {
            "scale1": {"questionnaire": "q1"},
            "scale2": {"questionnaire": "q1"},
        }

        settings = {"questionnaire_directory": "/base"}
|
||||
result = assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
assert result[1]["scale1"] == "specific_group"
|
||||
|
||||
@patch("src.utils.data_loader.load_yaml")
|
||||
@patch("src.utils.data_loader.load_questionnaire_scales")
|
||||
def test_wave_info_raises_error_for_invalid_subgroup_entry(
|
||||
self, mock_load_scales, mock_load_yaml
|
||||
):
|
||||
mock_load_yaml.return_value = {
|
||||
"questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}],
|
||||
"subgroup_scales": {"nonexistent": "group1"},
|
||||
}
|
||||
mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}
|
||||
|
||||
settings = {"questionnaire_directory": "/base"}
|
||||
|
||||
with pytest.raises(
|
||||
ValueError,
|
||||
match="Entry 'nonexistent' in subgroup_scales is not a loaded scale or questionnaire name",
|
||||
):
|
||||
assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
@patch("src.utils.data_loader.load_yaml")
|
||||
@patch("src.utils.data_loader.load_questionnaire_scales")
|
||||
def test_wave_info_returns_composite_scales_when_present(
|
||||
self, mock_load_scales, mock_load_yaml
|
||||
):
|
||||
mock_load_yaml.return_value = {
|
||||
"questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}],
|
||||
"composite_scales": {"composite1": {"items": ["scale1", "scale2"]}},
|
||||
}
|
||||
mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}
|
||||
|
||||
settings = {"questionnaire_directory": "/base"}
|
||||
result = assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
assert "composite1" in result[3]
|
||||
assert result[3]["composite1"]["items"] == ["scale1", "scale2"]
|
||||
|
||||
@patch("src.utils.data_loader.load_yaml")
|
||||
@patch("src.utils.data_loader.load_questionnaire_scales")
|
||||
def test_wave_info_returns_empty_composite_scales_when_absent(
|
||||
self, mock_load_scales, mock_load_yaml
|
||||
):
|
||||
mock_load_yaml.return_value = {
|
||||
"questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}]
|
||||
}
|
||||
mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}
|
||||
|
||||
settings = {"questionnaire_directory": "/base"}
|
||||
result = assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
assert result[3] == {}
|
||||
|
||||
@patch("src.utils.data_loader.load_yaml")
|
||||
@patch("src.utils.data_loader.load_questionnaire_scales")
|
||||
def test_wave_info_handles_multiple_questionnaires(
|
||||
self, mock_load_scales, mock_load_yaml
|
||||
):
|
||||
mock_load_yaml.return_value = {
|
||||
"questionnaires": [
|
||||
{"name": "q1", "path": "/path/q1.yaml"},
|
||||
{"name": "q2", "path": "/path/q2.yaml"},
|
||||
]
|
||||
}
|
||||
mock_load_scales.side_effect = [
|
||||
{"scale1": {"questionnaire": "q1"}},
|
||||
{"scale2": {"questionnaire": "q2"}},
|
||||
]
|
||||
|
||||
settings = {"questionnaire_directory": "/base"}
|
||||
result = assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
assert "scale1" in result[0]
|
||||
assert "scale2" in result[0]
|
||||
assert len(result[0]) == 2
|
||||
|
||||
@patch("src.utils.data_loader.load_yaml")
|
||||
@patch("src.utils.data_loader.load_questionnaire_scales")
|
||||
def test_wave_info_returns_correct_tuple_structure(
|
||||
self, mock_load_scales, mock_load_yaml
|
||||
):
|
||||
mock_load_yaml.return_value = {
|
||||
"questionnaires": [{"name": "q1", "path": "/path/q1.yaml"}]
|
||||
}
|
||||
mock_load_scales.return_value = {"scale1": {"questionnaire": "q1"}}
|
||||
|
||||
settings = {"questionnaire_directory": "/base"}
|
||||
result = assemble_wave_info("wave_config.yaml", settings)
|
||||
|
||||
assert isinstance(result, tuple)
|
||||
assert len(result) == 4
|
||||
assert isinstance(result[0], dict) # scale_dictionary
|
||||
assert isinstance(result[1], dict) # final_subgroup_scales
|
||||
assert isinstance(result[2], set) # excluded_scales
|
||||
assert isinstance(result[3], dict) # composite_scales
|
||||
266
tests/test_database_populator.py
Normal file
@ -0,0 +1,266 @@
import pytest
import sqlite3
import pandas as pd
import tempfile
import os
from unittest.mock import patch, MagicMock
from src.utils.database_populator import populate_database


class TestPopulateDatabase:
    @staticmethod
    def test_single_wave_data_creates_correct_table():
        test_data = {1: pd.DataFrame({"col1": [1, 2], "col2": ["a", "b"]})}

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
            tables = cursor.fetchall()

            assert ("wave1",) in tables

            cursor.execute("SELECT * FROM wave1")
            rows = cursor.fetchall()
            assert len(rows) == 2
            assert rows[0] == (1, "a")
            assert rows[1] == (2, "b")

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_multiple_waves_create_separate_tables():
        test_data = {
            1: pd.DataFrame({"wave1_col": [1, 2]}),
            2: pd.DataFrame({"wave2_col": [3, 4]}),
            3: pd.DataFrame({"wave3_col": [5, 6]}),
        }

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
            tables = [table[0] for table in cursor.fetchall()]

            assert "wave1" in tables
            assert "wave2" in tables
            assert "wave3" in tables
            assert len(tables) == 3

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_empty_dataframe_creates_table_with_no_rows():
        test_data = {1: pd.DataFrame({"empty_col": []})}

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("SELECT COUNT(*) FROM wave1")
            row_count = cursor.fetchone()[0]

            assert row_count == 0

            cursor.execute("PRAGMA table_info(wave1)")
            columns = cursor.fetchall()
            assert len(columns) == 1
            assert columns[0][1] == "empty_col"

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_empty_dictionary_creates_no_tables():
        test_data = {}

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("SELECT name FROM sqlite_master WHERE type='table'")
            tables = cursor.fetchall()

            assert len(tables) == 0

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_existing_database_tables_are_replaced():
        test_data = {1: pd.DataFrame({"col": [1, 2]})}

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("CREATE TABLE wave1 (old_col INTEGER)")
            cursor.execute("INSERT INTO wave1 VALUES (999)")
            conn.commit()
            conn.close()

            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("SELECT * FROM wave1")
            rows = cursor.fetchall()

            assert len(rows) == 2
            assert rows[0] == (1,)
            assert rows[1] == (2,)

            cursor.execute("PRAGMA table_info(wave1)")
            columns = cursor.fetchall()
            assert len(columns) == 1
            assert columns[0][1] == "col"

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_database_uses_default_path_when_not_specified():
        test_data = {1: pd.DataFrame({"col": [1]})}
        default_path = "results/study_results.sqlite"

        with patch("sqlite3.connect") as mock_connect:
            mock_connection = MagicMock()
            mock_connect.return_value = mock_connection

            populate_database(test_data)

            mock_connect.assert_called_once_with(default_path)
            mock_connection.close.assert_called_once()

    @staticmethod
    def test_dataframe_with_various_data_types_preserved():
        test_data = {
            1: pd.DataFrame(
                {
                    "int_col": [1, 2],
                    "float_col": [1.5, 2.7],
                    "str_col": ["text1", "text2"],
                    "bool_col": [True, False],
                }
            )
        }

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            df_result = pd.read_sql_query("SELECT * FROM wave1", conn)

            assert len(df_result) == 2
            assert list(df_result.columns) == [
                "int_col",
                "float_col",
                "str_col",
                "bool_col",
            ]
            assert df_result["int_col"].iloc[0] == 1
            assert df_result["str_col"].iloc[1] == "text2"

            conn.close()
        finally:
            os.unlink(db_path)

    @patch("sqlite3.connect")
    def test_connection_closed_even_when_exception_occurs(self, mock_connect):
        mock_connection = MagicMock()
        mock_connect.return_value = mock_connection
        mock_connection.__enter__ = MagicMock(return_value=mock_connection)
        mock_connection.__exit__ = MagicMock(return_value=False)

        test_dataframe = pd.DataFrame({"col": [1, 2]})
        test_dataframe.to_sql = MagicMock(side_effect=Exception("SQL Error"))

        test_data = {1: test_dataframe}

        with pytest.raises(Exception, match="SQL Error"):
            populate_database(test_data, "test.db")

        mock_connection.close.assert_called_once()

    @staticmethod
    def test_wave_numbers_create_correct_table_names():
        test_data = {
            10: pd.DataFrame({"col": [1]}),
            99: pd.DataFrame({"col": [2]}),
            1: pd.DataFrame({"col": [3]}),
        }

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute(
                "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
            )
            tables = [table[0] for table in cursor.fetchall()]

            expected_tables = ["wave1", "wave10", "wave99"]
            assert tables == expected_tables

            conn.close()
        finally:
            os.unlink(db_path)

    @staticmethod
    def test_dataframe_index_not_stored_in_database():
        df_with_custom_index = pd.DataFrame({"col": [1, 2]})
        df_with_custom_index.index = ["row1", "row2"]
        test_data = {1: df_with_custom_index}

        with tempfile.NamedTemporaryFile(suffix=".sqlite", delete=False) as tmp_file:
            db_path = tmp_file.name

        try:
            populate_database(test_data, db_path)

            conn = sqlite3.connect(db_path)
            cursor = conn.cursor()
            cursor.execute("PRAGMA table_info(wave1)")
            columns = [column[1] for column in cursor.fetchall()]

            assert "col" in columns
            assert "index" not in columns
            assert len(columns) == 1

            conn.close()
        finally:
            os.unlink(db_path)
433
tests/test_scale_processor.py
Normal file
@ -0,0 +1,433 @@
import pytest
import pandas as pd
import numpy as np
from src.scale_processor import ScaleProcessor


class TestScaleProcessor:
    @staticmethod
    def test_initializes_with_basic_scale_config():
        config = {"name": "test_scale", "items": [{"id": "item1"}, {"id": "item2"}]}
        processor = ScaleProcessor(config)

        assert processor.name == "test_scale"
        assert processor.items == [{"id": "item1"}, {"id": "item2"}]
        assert processor.calculation == "mean"
        assert processor.score_min == 1
        assert processor.score_max == 5
        assert processor.output == "test_scale"
        assert processor.subgroup is None

    @staticmethod
    def test_initializes_with_custom_configuration():
        config = {
            "name": "custom_scale",
            "items": [{"id": "q1"}],
            "calculation": "sum",
            "score_range": (0, 10),
            "response_options": {"1": "Yes", "2": "No"},
            "output": "custom_output",
        }
        processor = ScaleProcessor(config, "group1")

        assert processor.calculation == "sum"
        assert processor.score_min == 0
        assert processor.score_max == 10
        assert processor.response_options == {"1": "Yes", "2": "No"}
        assert processor.output == "custom_output"
        assert processor.subgroup == "group1"

    @staticmethod
    def test_check_items_passes_when_all_columns_present():
        config = {"name": "test", "items": [{"id": "col1"}, {"id": "col2"}]}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4], "col3": [5, 6]})

        processor.check_items(df)

    @staticmethod
    def test_check_items_raises_error_when_columns_missing():
        config = {"name": "test", "items": [{"id": "col1"}, {"id": "missing"}]}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"col1": [1, 2], "col2": [3, 4]})

        with pytest.raises(
            ValueError, match="Missing columns in data: \\['missing'\\]"
        ):
            processor.check_items(df)

    @staticmethod
    def test_get_subgroup_mask_returns_all_true_when_no_subgroup():
        config = {"name": "test", "items": [{"id": "col1"}]}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"col1": [1, 2, 3]})

        mask = processor.get_subgroup_mask(df)

        assert mask.all()
        assert len(mask) == 3

    @staticmethod
    def test_get_subgroup_mask_returns_all_true_when_subgroup_is_all():
        config = {"name": "test", "items": [{"id": "col1"}]}
        processor = ScaleProcessor(config, "all")
        df = pd.DataFrame({"col1": [1, 2, 3]})

        mask = processor.get_subgroup_mask(df)

        assert mask.all()

    @staticmethod
    def test_get_subgroup_mask_filters_by_subgroup_column():
        config = {"name": "test", "items": [{"id": "col1"}]}
        processor = ScaleProcessor(config, "group")
        df = pd.DataFrame({"col1": [1, 2, 3], "group": [True, False, True]})

        mask = processor.get_subgroup_mask(df)

        assert mask.iloc[0]
        assert not mask.iloc[1]
        assert mask.iloc[2]

    @staticmethod
    def test_get_subgroup_mask_returns_all_true_when_subgroup_column_missing():
        config = {"name": "test", "items": [{"id": "col1"}]}
        processor = ScaleProcessor(config, "nonexistent")
        df = pd.DataFrame({"col1": [1, 2, 3]})

        mask = processor.get_subgroup_mask(df)

        assert mask.all()

    @staticmethod
    def test_process_calculates_mean_by_default():
        config = {"name": "test", "items": [{"id": "q1"}, {"id": "q2"}]}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [2, 4, 6], "q2": [4, 6, 8]})

        result = processor.process(df)

        assert result.columns[0] == "test"
        assert result["test"].iloc[0] == 3.0
        assert result["test"].iloc[1] == 5.0
        assert result["test"].iloc[2] == 7.0

    @staticmethod
    def test_process_calculates_sum_when_specified():
        config = {
            "name": "sum_scale",
            "items": [{"id": "q1"}, {"id": "q2"}],
            "calculation": "sum",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2, 3], "q2": [4, 5, 6]})

        result = processor.process(df)

        assert result["sum_scale"].iloc[0] == 5
        assert result["sum_scale"].iloc[1] == 7
        assert result["sum_scale"].iloc[2] == 9

    @staticmethod
    def test_process_handles_item_inversion():
        config = {
            "name": "inverted",
            "items": [{"id": "q1", "inverse": True}, {"id": "q2"}],
            "score_range": (1, 5),
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 5], "q2": [3, 3]})

        result = processor.process(df)

        assert result["inverted"].iloc[0] == 4.0  # (5+1-1+3)/2 = 4
        assert result["inverted"].iloc[1] == 2.0  # (5+1-5+3)/2 = 2

    @staticmethod
    def test_process_handles_categorical_calculation_single_item():
        config = {
            "name": "category",
            "items": [{"id": "q1"}],
            "calculation": "categorical",
            "response_options": {"1": "Option A", "2": "Option B", "3": "Option C"},
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2, 3, 1]})

        result = processor.process(df)

        assert result["category"].iloc[0] == "Option A"
        assert result["category"].iloc[1] == "Option B"
        assert result["category"].iloc[2] == "Option C"
        assert result["category"].iloc[3] == "Option A"

    @staticmethod
    def test_process_raises_error_for_categorical_with_multiple_items():
        config = {
            "name": "category",
            "items": [{"id": "q1"}, {"id": "q2"}],
            "calculation": "categorical",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2], "q2": [1, 2]})

        with pytest.raises(
            ValueError, match="calculation 'categorical' is only for single-item scales"
        ):
            processor.process(df)

    @staticmethod
    def test_process_handles_categorical_with_open_ended_other_option():
        config = {
            "name": "category",
            "items": [{"id": "q1", "open_ended_id": "q1_other"}],
            "calculation": "categorical",
            "response_options": {"1": "Option A", "10": "Other"},
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame(
            {
                "q1": [1, 10, 1, 10],
                "q1_other": ["", "Custom text", "", "Another custom"],
            }
        )

        result = processor.process(df)

        assert result["category"].iloc[0] == "Option A"
        assert result["category"].iloc[1] == "Other"
        assert pd.isna(result["category_other_text"].iloc[0])
        assert result["category_other_text"].iloc[1] == "Custom text"
        assert pd.isna(result["category_other_text"].iloc[2])
        assert result["category_other_text"].iloc[3] == "Another custom"

    @staticmethod
    def test_process_handles_ordinal_calculation_single_item():
        config = {
            "name": "ordinal",
            "items": [{"id": "q1"}],
            "calculation": "ordinal",
            "response_options": {1: "Low", 2: "Medium", 3: "High"},
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2, 3, 2]})

        result = processor.process(df)

        assert result["ordinal"].iloc[0] == "Low"
        assert result["ordinal"].iloc[1] == "Medium"
        assert result["ordinal"].iloc[2] == "High"
        assert result["ordinal"].iloc[3] == "Medium"

    @staticmethod
    def test_process_raises_error_for_ordinal_with_multiple_items():
        config = {
            "name": "ordinal",
            "items": [{"id": "q1"}, {"id": "q2"}],
            "calculation": "ordinal",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2], "q2": [1, 2]})

        with pytest.raises(
            ValueError,
            match="calculation 'ordinal' only allowed with single-item scales",
        ):
            processor.process(df)

    @staticmethod
    def test_process_handles_response_calculation_single_item():
        config = {
            "name": "response",
            "items": [{"id": "q1"}],
            "calculation": "response",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1.5, 2.7, 3.9]})

        result = processor.process(df)

        assert result["response"].iloc[0] == 1.5
        assert result["response"].iloc[1] == 2.7
        assert result["response"].iloc[2] == 3.9

    @staticmethod
    def test_process_raises_error_for_response_with_multiple_items():
        config = {
            "name": "response",
            "items": [{"id": "q1"}, {"id": "q2"}],
            "calculation": "response",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2], "q2": [1, 2]})

        with pytest.raises(
            ValueError,
            match="calculation 'response' can only be used with single-item scales!",
        ):
            processor.process(df)

    @staticmethod
    def test_process_handles_sum_correct_calculation():
        config = {
            "name": "correct_sum",
            "items": [
                {"id": "q1", "correct": 2},
                {"id": "q2", "correct": 1},
                {"id": "q3", "correct": 3},
            ],
            "calculation": "sum_correct",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame(
            {
                "q1": [2, 1, 2],  # correct, wrong, correct
                "q2": [1, 1, 2],  # correct, correct, wrong
                "q3": [3, 2, 3],  # correct, wrong, correct
            }
        )

        result = processor.process(df)

        assert result["correct_sum"].iloc[0] == 3  # all correct
        assert result["correct_sum"].iloc[1] == 1  # one correct
        assert result["correct_sum"].iloc[2] == 2  # two correct

    @staticmethod
    def test_process_handles_mean_correct_calculation():
        config = {
            "name": "correct_mean",
            "items": [{"id": "q1", "correct": 1}, {"id": "q2", "correct": 2}],
            "calculation": "mean_correct",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame(
            {
                "q1": [1, 1, 2],  # correct, correct, wrong
                "q2": [2, 1, 2],  # correct, wrong, correct
            }
        )

        result = processor.process(df)

        assert result["correct_mean"].iloc[0] == 1.0  # 2/2 = 1.0
        assert result["correct_mean"].iloc[1] == 0.5  # 1/2 = 0.5
        assert result["correct_mean"].iloc[2] == 0.5  # 1/2 = 0.5

    @staticmethod
    def test_process_raises_error_for_unknown_correct_calculation():
        config = {
            "name": "test",
            "items": [{"id": "q1", "correct": 1}],
            "calculation": "unknown_correct",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2]})

        with pytest.raises(
            ValueError, match="Unknown calculation for objective items: unknown_correct"
        ):
            processor.process(df)

    @staticmethod
    def test_process_raises_error_for_unknown_calculation_type():
        config = {"name": "test", "items": [{"id": "q1"}], "calculation": "unknown"}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2]})

        with pytest.raises(ValueError, match="Unknown calculation: unknown"):
            processor.process(df)

    @staticmethod
    def test_process_applies_subgroup_filtering():
        config = {
            "name": "filtered",
            "items": [{"id": "q1"}],
            "calculation": "response",
        }
        processor = ScaleProcessor(config, "group")
        df = pd.DataFrame({"q1": [10, 20, 30], "group": [True, False, True]})

        result = processor.process(df)

        assert result["filtered"].iloc[0] == 10
        assert pd.isna(result["filtered"].iloc[1])
        assert result["filtered"].iloc[2] == 30

    @staticmethod
    def test_process_handles_missing_values_in_mean_calculation():
        config = {"name": "with_na", "items": [{"id": "q1"}, {"id": "q2"}]}
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, np.nan, 3], "q2": [2, 4, np.nan]})

        result = processor.process(df)

        assert result["with_na"].iloc[0] == 1.5  # (1+2)/2
        assert result["with_na"].iloc[1] == 4.0  # only q2 value
        assert result["with_na"].iloc[2] == 3.0  # only q1 value

    @staticmethod
    def test_process_handles_missing_values_in_categorical_calculation():
        config = {
            "name": "category_na",
            "items": [{"id": "q1"}],
            "calculation": "categorical",
            "response_options": {"1": "Yes", "2": "No"},
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, np.nan, 2]})

        result = processor.process(df)

        assert result["category_na"].iloc[0] == "Yes"
        assert pd.isna(result["category_na"].iloc[1])
        assert result["category_na"].iloc[2] == "No"

    @staticmethod
    def test_process_uses_custom_output_name():
        config = {
            "name": "original_name",
            "items": [{"id": "q1"}],
            "output": "custom_output",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2, 3]})

        result = processor.process(df)

        assert "custom_output" in result.columns
        assert "original_name" not in result.columns

    @staticmethod
    def test_process_raises_error_for_ordinal_without_response_options_dict():
        config = {
            "name": "ordinal",
            "items": [{"id": "q1"}],
            "calculation": "ordinal",
            "response_options": ["Not a dict"],
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2]})

        with pytest.raises(
            ValueError,
            match="For calculation 'ordinal', response_options must be a dict mapping",
        ):
            processor.process(df)

    @staticmethod
    def test_process_raises_error_for_categorical_without_response_options_dict():
        config = {
            "name": "categorical",
            "items": [{"id": "q1"}],
            "calculation": "categorical",
            "response_options": "Not a dict",
        }
        processor = ScaleProcessor(config)
        df = pd.DataFrame({"q1": [1, 2]})

        with pytest.raises(
            ValueError,
            match="response_options must be a dict for calculation 'categorical'",
        ):
            processor.process(df)