May 2009

I think it was some bug triggered by a combination of empty "output" files, multiple test cases and some output caching problems...
Could you please check if it's ok now?

5 months later
12 months later

Is there a judge that doesn't ignore whitespace and end-of-line characters, i.e. one that just does a byte-for-byte file comparison between the user's output and the expected one? Thanks.
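Such an exact-match check is essentially a byte-for-byte stream comparison. A minimal sketch in C (the stream names and how the verdict gets reported are left out, since they depend on the judge API):

```c
#include <stdio.h>

/* Return 1 if the two streams are byte-for-byte identical
 * (no whitespace or EOL normalisation at all), 0 otherwise. */
int files_identical(FILE *a, FILE *b)
{
    int ca, cb;
    do {
        ca = fgetc(a);
        cb = fgetc(b);
        if (ca != cb)          /* mismatch, or one stream ended early */
            return 0;
    } while (ca != EOF);
    return 1;                  /* both streams reached EOF together */
}
```

Note that with this approach a single trailing newline or space makes the answer wrong, which is exactly the strictness being asked about.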

2 months later

I need some help. I don't understand the source code at spoj.pl/files/judge/1000/8. As far as I understand, a master judge is used to calculate the score, not to check a particular test case, yet spoj_p_in, the input file for the problem, is read there. I'm surely missing something, but I can't figure out why it is used there. Can someone explain this to me, please?

Should I add a checker for test cases in the "test data upload" section? I will write my own checker, because the problem I would like to set has no unique output.

Also, the code at spoj.pl/files/judge/3/7 seems, IMHO, a bit complicated. Can I read a double simply like this?

double val;
fscanf(spoj_t_out, "%lf", &val);

Thank you in advance.

Best regards,

3 months later
1 year later

Status:
AC - accepted
WA - wrong answer
CE - compilation error
RE - runtime error
TLE - time limit exceeded

What are the others?
(SIGSEGV, NZEC, SIGXFSZ, SIGABRT, SIGFPE)

Edit:
SIGSEGV - status="RE" && sig=11
SIGXFSZ - status="RE" && sig=25
SIGABRT - status="RE" && sig=6
SIGFPE - status="RE" && sig=8

For example: fprintf(spoj_score, "RE 0 6 0 0\n"); // SIGABRT

I still don't know NZEC and SIGKILL.

Edit2:
SIGKILL - status="RE" && sig=9
other SIGs: http://members.chello.pl/prosiak/sle_PL.pdf

Just NZEC please.

NZEC: status=RE, signal=-1
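The signal numbers collected above match the standard Linux values in <signal.h>, so a judge can write the score line using symbolic names instead of magic numbers. A sketch (the five-field format is taken from the fprintf(spoj_score, ...) example earlier in the thread; the meaning of the remaining fields is not documented here):

```c
#include <signal.h>
#include <stdio.h>

/* Format a "runtime error" score line for a given signal,
 * e.g. sig = SIGABRT (6 on Linux) yields "RE 0 6 0 0". */
int write_re_verdict(char *buf, size_t n, int sig)
{
    return snprintf(buf, n, "RE 0 %d 0 0\n", sig);
}
```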

In a normal situation the judge uses only AC/WA/SE. Sometimes (for special problems, e.g. judge=interpreter) it can use CE/RE/TLE/MLE. But I don't think it is a good idea for a judge to set signals.

4 months later

Could you describe "Test data multiupload"?
It's a new option, but it was added neither to the tutorial for problem setters nor here.
(The test-updating section is also new, but I think it's obvious how to use it; you may describe it too if you want.)

Test data multiupload is a beta feature at the moment. It allows you to import input and output for test cases - check the export feature to see the format. But please note that, unlike export, importing supports only in/out data.

1 month later
1 month later

And one more question.

I want to make a task with the master judge "1001. Score is % of correctly solved sets".
So I chose this master judge and changed the "assessment type" to "maximize score".
The task works correctly, but all users who get AC receive maximum points for the task, even if they scored 0/100 pts.
How can I fix this?
Do I have to write my own judge as well?

1 month later

Based on the information in this thread, I've written a generic custom judge in Python that tests the correctness of submitted Python source code for a particular problem based on the evaluation of a doctest (read from the problem's input). The judge works as intended, except for the following two cases:

  • The judge can detect run-time errors while performing the doctest, but it cannot instruct SPOJ to give a "run-time error" result. Earlier in this thread it was mentioned that a judge's exit status is interpreted as:
      1 = wrong answer
      2 = time limit exceeded
      3 = compiler error
      anything outside the range 0-3 seems to generate an "internal error"
    Is there an exit status that corresponds to "run-time error", or is there another way SPOJ can be notified about run-time exceptions?
  • Is there a way to access the time limit assigned to test data from within custom judges (e.g. through environment variables)? That way, custom judges could break off the execution of submitted source code when the time limit is exceeded and generate a "time limit exceeded" result. If time limits are not managed by custom judges (if no access is given to the time limit set for test data, only hard-coded time limits can be used), after some time an "internal error" result is produced.
24 days later
  1. Judge != Master Judge

  2. You should allow only TEXT submissions (to prevent execution)

  3. Set 'Problem type' to 'interactive' (more results)

  4. Judge's TLE == IE

  5. For classic problems judge's time limit is const and == 150s, for interactive problems judge's time limit is 2 * solution's time limit + 2s. You can use getrlimit() to determine it.
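The getrlimit() suggestion from point 5 can be sketched as follows; whether the SPOJ backend expresses the judge's time limit specifically through RLIMIT_CPU is an assumption based on the advice above:

```c
#include <sys/resource.h>

/* Return the soft CPU-time limit (in seconds) of the current process,
 * or -1 if it is unlimited or the call fails.  A judge could use this
 * to derive the deadline described in point 5 instead of hard-coding it. */
long cpu_limit_seconds(void)
{
    struct rlimit rl;
    if (getrlimit(RLIMIT_CPU, &rl) != 0 || rl.rlim_cur == RLIM_INFINITY)
        return -1;
    return (long)rl.rlim_cur;
}
```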

What information becomes available for which kind of judge indeed wasn't clear to me while writing early prototypes of my judge. My questions were specifically related to judges, not master judges. I had to find out the meaning of the exit status for judges by trial and error; I never found any documentation about it. Up until now it is still unclear to me how to generate a "run-time error" status from judges. My judge is able to detect run-time errors, but I've found no way to report this status using an appropriate exit status.

My Python-specific judge (using doctests) doesn't suffer from the fact that SPOJ does run the user code. It simply ignores the outcome of that process. My problem with allowing only TEXT submissions (which would prevent an extra execution of the code) is that users (in my case students that are new to programming) submit Python code. Having them submit their code as TEXT would be quite confusing.

One particular reason I've written a generic Python-specific judge is to perform tests on source code beyond tests based purely on input/output. It also allows mixing source code submitted by the user with additional source code: for example, a skeleton of classes users have to implement further, or code that adds extra layers to the submitted source code, e.g. to provide additional feedback or a graphical layer. In my opinion, the design decision to automatically execute the source code provided by the user is too tightly linked with the idea of testing the "correctness" of submitted source code based only on the output it generates for a given input. A more flexible solution would use "lazy execution": the user's source code is only executed if the judge requests the output it generates. This would mean that if the judge ignores the output, the code is never executed outside the control of the judge.

Explain "more results". As far as I understood the principle behind interactive problems, it means that the judge can respond interactively to output generated by the program submitted by the user, and this output in turn can be used by the user's program. For the moment I think I don't need this feature, unless I've got it wrong.

This I figured out. However, I would like my Python-specific judge to take into account the time limit that is specified in the problem definition. In order to do that, the judge should be able to access that time limit (e.g. through an environment variable).

The getrlimit() function actually is the answer to my question addressed in point number 4. I can also get the information coming from that system call in Python.

Judge: problem_input, problem_output, tested_output, tested_src.
Master Judge: results of testcases, tested_src.

spoj.h
spoj_interactive.h

// spoj.h
#define SPOJ_RV_POSITIVE              0
#define SPOJ_RV_NEGATIVE              1
#define SPOJ_RV_IE                    2
// spoj_interactive.h
#define SPOJ_RV_AC                    0
#define SPOJ_RV_WA                    1
#define SPOJ_RV_SE                    2
#define SPOJ_RV_CE                    3
#define SPOJ_RV_RE                    4
#define SPOJ_RV_TLE                   5
#define SPOJ_RV_MLE                   6
#define SPOJ_RV_EOF                   7
#define SPOJ_RV_IE                    255

In each test case the judge is executed only if there was no problem with the tested program (RE/TLE/MLE).
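Given those constants, the tail of an interactive-style judge reduces to returning the matching SPOJ_RV_* value. A minimal sketch using the constants quoted from spoj_interactive.h above (the string comparison stands in for whatever check the problem actually needs):

```c
#include <string.h>

/* Return values quoted from spoj_interactive.h above. */
#define SPOJ_RV_AC 0
#define SPOJ_RV_WA 1
#define SPOJ_RV_SE 2

/* Map a trivial answer comparison onto the interactive judge's
 * return-code convention. */
int verdict(const char *expected, const char *got)
{
    if (expected == NULL || got == NULL)
        return SPOJ_RV_SE;   /* judge-side failure */
    return strcmp(expected, got) == 0 ? SPOJ_RV_AC : SPOJ_RV_WA;
}
```

The judge's main would then end with something like `return verdict(expected, got);`.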

Thanks for the explanation of the extended list of exit statuses when using an interactive judge. This indeed helped me properly report run-time errors in my generic Python judge based on doctests. However (a detail): I don't understand the design decision for the SPOJ backend to distinguish between the supported exit-status lists for regular judges and interactive judges.

This indeed provides the time limit as I need it. Thanks.

2 months later

[quote="Turbo"]
[b]SPOJ.C[/b]

        *spoj_u_info,   /* additional info - psetter only */
        *spoj_p_info;   /* additional info - psetter and solution's owner */

[..]
[i]*spoj_p_info[/i] - additional info available to the problemsetter (I recommend using this possibility; you can check for errors or intermediate test data during a contest. All stored information is available only to the problemsetter).

[..]

SPOJ.H

        *spoj_p_info,   /* additional info - problemsetter only */
        *spoj_u_info;   /* additional info - psetter and solution's owner */

[/quote]
Just to make sure: spoj_p_info (file descriptor 6) is only available to problemsetters, and spoj_u_info (file descriptor 7) is available to both users and problemsetters? Note that in the spoj.c example the descriptions are the other way around, but judging by the names I would assume the one in spoj.h is correct.

21 days later
2 months later

Hi,
I uploaded 2 test cases for 'RAONE', and after that I uploaded 1 more test case. When I rejudge, only the first 2 test cases get judged. Why not the new one?