FAQ

These applications are reviewed by agents, so any bot-like indicators will lead to the rejection of your application. Preferably use your academic institution email address, if you have one. Fill in the fields responsibly and state in your use case that you intend to participate in this challenge.

To verify your account, you can use the following link.

You can reset your password here.

Fields in each data entry are separated by the '1' character (0x31 in UTF-8). A code snippet to help you read the data can be found here.
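For illustration only, below is a minimal Python sketch of reading such a file, assuming one data entry per line; the file name "training.tsv" is a placeholder, and the linked snippet remains the reference implementation.

    # Minimal sketch: read one data entry per line and split it into fields.
    # "training.tsv" is a placeholder file name, not part of the challenge spec.
    SEPARATOR = chr(0x31)  # field separator described above

    def parse_line(line):
        """Split a single record into its list of fields."""
        return line.rstrip("\n").split(SEPARATOR)

    with open("training.tsv", encoding="utf-8") as f:
        for line in f:
            fields = parse_line(line)
            # ... process the fields of one data entry here ...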

You need to provide one submission file for each engagement type (4 in total). The submission can be made here, provided that you are already logged in. The submission files should be in CSV format and contain one line for each row in the test set, consisting of: <Tweet_Id>,<User_Id>,<Prediction>.
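As a rough sketch (not the official submission tooling), the snippet below writes one such CSV file from an iterable of (tweet_id, user_id, prediction) tuples; the function and file names are illustrative only.

    import csv

    def write_submission(path, rows):
        """Write one <Tweet_Id>,<User_Id>,<Prediction> line per test-set row."""
        with open(path, "w", newline="") as f:
            writer = csv.writer(f)
            for tweet_id, user_id, prediction in rows:
                writer.writerow([tweet_id, user_id, prediction])

    # One file per engagement type, e.g.:
    # write_submission("reply_prediction.csv", reply_rows)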

If you violate any of the challenge T&Cs, e.g. by singling out a Tweet from the dataset and sharing it publicly or in this forum, you will be disqualified and won't be able to participate in the challenge. Preserving user privacy is an integral part of this challenge.

A code snippet is provided here to help you compute the relative cross entropy (RCE) and the area under the precision-recall curve (PR-AUC) for each engagement.
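The linked snippet is authoritative; as a hedged sketch of how these two metrics are commonly computed with scikit-learn, assuming RCE is measured against a naive predictor that always outputs the positive rate of the ground truth:

    from sklearn.metrics import auc, log_loss, precision_recall_curve

    def compute_prauc(pred, gt):
        """Area under the precision-recall curve."""
        precision, recall, _ = precision_recall_curve(gt, pred)
        return auc(recall, precision)

    def compute_rce(pred, gt):
        """Cross entropy of the predictions relative to a naive predictor
        that always outputs the positive rate of the ground truth (assumed)."""
        cross_entropy = log_loss(gt, pred)
        positive_rate = sum(gt) / len(gt)
        naive_cross_entropy = log_loss(gt, [positive_rate] * len(gt))
        return (1.0 - cross_entropy / naive_cross_entropy) * 100.0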

The RCE and PR-AUC values are averaged across the four engagement types. Participants are then ranked separately on each of the two averaged metrics, and the sum of these two ranks is used to obtain their overall ranking score.
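For clarity, a small illustrative sketch of this ranking rule (higher averaged RCE and PR-AUC are assumed to be better; the data layout is hypothetical):

    def overall_ranking(results):
        """results: dict mapping participant -> (avg_rce, avg_prauc)."""
        by_rce = sorted(results, key=lambda p: results[p][0], reverse=True)
        by_prauc = sorted(results, key=lambda p: results[p][1], reverse=True)
        rce_rank = {p: i + 1 for i, p in enumerate(by_rce)}
        prauc_rank = {p: i + 1 for i, p in enumerate(by_prauc)}
        # A lower summed rank means a better overall position.
        return sorted(results, key=lambda p: rce_rank[p] + prauc_rank[p])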