EvALL Web Portlet: Evaluate
Select the benchmark you want to explore
- IberEval 2018 MEX-A3T (Authorship and aggressiveness analysis in Twitter): Aggressiveness Identification
- CLEF 2014 RepLab: Author Categorisation
- IberEval 2018 MEX-A3T (Authorship and aggressiveness analysis in Twitter): Author Profiling Location
- IberEval 2018 MEX-A3T (Authorship and aggressiveness analysis in Twitter): Author Profiling Occupation
- CLEF 2014 RepLab: Author Ranking
- CLEF 2013 ImageCLEF: Case-Based Textual Retrieval
- IberEval 2017 COSET: Classification Of Spanish Election Tweets
- SemEval 2013 DDIExtraction: Extraction of Drug-Drug Interactions from BioMedical Texts
- CLEF 2013 RepLab: Filtering
- IberEval 2017 Stance and Gender Detection in Tweets on Catalan Independence: Gender Detection Catalan
- IberEval 2017 Stance and Gender Detection in Tweets on Catalan Independence: Gender Detection Spanish
- IberEval 2018 MultiStanceCat: MultiStance Detection Catalan
- IberEval 2018 MultiStanceCat: MultiStance Detection Spanish
- CLEF 2014 RepLab: Reputation Dimensions
- CLEF 2013 RepLab: Reputational Polarity
- SEPLN 2016 TASS: Sentiment Analysis 4 classes
- SEPLN 2016 TASS: Sentiment Analysis 6 classes
Select the configuration for the evaluation
- Default: recommended configuration. EvALL will select the appropriate settings for you.
- Customized: choose the set of metrics you want to consider, or set the parameters of the metrics.
Select the set of metrics for the evaluation
- Official set of metrics: those prescribed in the test collection / evaluation campaign.
- Full set of metrics: the official evaluation metrics plus all metrics recommended by the EvALL toolkit.
- Customized set of metrics: choose the set of metrics you want to consider, or set the parameters of the metrics.
Select the system to be compared from the EvALL repository
- Best system in EvALL repository: the best and average system in the EvALL repository for this benchmark.
- Select the system from the EvALL repository: pick a system from the list of systems stored in EvALL for this benchmark.
Select the settings for the evaluation report
- Generate pdf/latex report: generate a PDF/LaTeX report.
- Generate tsv report: generate a TSV report.
- Add metric descriptions: include explanations and definitions for each of the metrics.
- Add output verifications: include the results of the verification step for each of the inputs you provide (with warnings in case of inconsistent format).
Results of the evaluation for the selected configuration
No preview
Available downloads:
- Report
- LaTeX project
- TSV files
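For downstream processing, the TSV report can be loaded with a few lines of Python. This is a minimal sketch only: the column names (`metric`, `system`, `value`) and the sample data here are assumptions for illustration, since the actual layout of an EvALL TSV report depends on the benchmark and the metrics selected.

```python
import csv
import io

# Hypothetical sample of a TSV report; real column names and rows
# will differ depending on the chosen benchmark and metric set.
tsv_report = (
    "metric\tsystem\tvalue\n"
    "Accuracy\tmy_system\t0.84\n"
    "F1\tmy_system\t0.79\n"
)

def parse_report(text):
    """Parse a tab-separated report into a {metric: score} dict."""
    reader = csv.DictReader(io.StringIO(text), delimiter="\t")
    return {row["metric"]: float(row["value"]) for row in reader}

scores = parse_report(tsv_report)
print(scores)  # {'Accuracy': 0.84, 'F1': 0.79}
```

Reading the file with `csv.DictReader` (rather than splitting lines by hand) keeps the parsing robust if additional columns appear in the report.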