Nov 2007

WSCPO stands for "Weighted Score, Classical Problemset Only (at present :) )". The score is calculated in a similar way to that of the DASM League, and forms the basis of the main SPOJ ranking. In due course, it will be extended to include challenge problems. Since this requires some extra code to implement efficiently, we will only introduce it in several months (once all the other rewriting work on the site is complete).

The current formula used to determine the "value" of any classical problem p is: 80/(40+number of people who have solved p). In order to see the number of points you will receive if you solve p as the next person, please use your mouse to highlight the number of people who've solved it so far (when viewing the problemlist).
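For illustration, here is a minimal Python sketch of the scheme as described above (the function names are mine, not anything from the SPOJ codebase; it assumes each solver's per-problem points are recalculated whenever the solver count changes):

    def problem_value(solvers):
        # Value of a classical problem that `solvers` people have solved.
        return 80.0 / (40 + solvers)

    def user_score(solved_problem_counts):
        # Total weighted score for a user, given the current solver
        # count of every problem that user has solved.
        return sum(problem_value(n) for n in solved_problem_counts)

    print(problem_value(0))    # 2.0  - an unsolved problem is worth the maximum
    print(problem_value(40))   # 1.0  - 40 solvers put it at exactly one point
    print(problem_value(960))  # 0.08 - a very popular problem is worth little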

Honestly speaking, I (ZZZ) don't like the whole idea (WSCPO) at all. It's fuzzy, it's subjective, and it's even slightly irritating. It can encourage the mass registration of dirty accounts (btw, I hate to point it out, but what the hell has happened with the guys from VN? It looks as if half of Vietnam's population got involved in problem solving).

In short, the number of solved problems (they are all hard and nice) is quite enough for the purpose of ranking. Nothing personal, just my humble opinion.

Btw, I personally consider all of you (the SPOJ creators, its problemsetters, its top 10 users) to be REAL GENIUSES.

ZZZ, this formula simply assigns a weight between 0.1 and 2 to every problem. I am not saying the weighted formula is perfect; in fact (as you can probably see) it only affects the current standings a little.

But I see no way it could easily be broken through dirty accounts. I dare say it actually discourages them - solving problems from 2 accounts at a time marginally lowers your rating.
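A quick back-of-the-envelope check of that claim, assuming (as this thread suggests) that per-problem points are recalculated as the solver count grows; the numbers are invented:

    def problem_value(solvers):
        return 80.0 / (40 + solvers)

    # A problem has 60 other solvers; you solve it from your main account:
    print(problem_value(61))   # ~0.792 points credited to you

    # Solving it again from a second, "dirty" account raises the solver
    # count to 62, so the points held by your main account actually drop:
    print(problem_value(62))   # ~0.784 points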

As for subjectivity - it will have to be there in the end, to allow us to introduce points for challenges, and perhaps small boosts for one or two other things... Of course we could keep separate ratings in all these categories, but it's good to describe it all with a single overall number. We will see what to do about it in 2-3 months. For the moment nothing changes.

As far as users are concerned -- everyone has a right to their own opinion... In my opinion, users from the Asia Pacific region are actually doing a great job. But I would be the last to object if you were to invite lots of new people from Belarus to join in the fun, and rise to the top ranks of course :).

OK, Adrian! Everything is OK & nice.

I (ZZZ) readily agree with any reasonable opinion.

ZZZ, all feedback is taken into account.

We will probably start serious coding work in about a month. Once something sensible is up and running, it will be put up as a separate beta site for testing purposes. At this stage I cannot guarantee what form of ranking the new site will feature. But if you are dissatisfied with what you see, you can always put forward alternative suggestions.

I got it, Adrian! Thank you!

Interesting - why do other SPOJ users keep silent on this matter?

Maybe they really don't mind their ranking becoming more "granular" and "precise"?

Well, one reason is that not many SPOJ users look around here; another may be that people would rather have a higher point value on harder problems.

Regarding the potential ranking scheme, I think that the idea "the fewer submissions, the more points" should not be implemented (assuming more points is a good thing, of course).

While there is a certain spammishness about submitting solutions en masse and milking the last milliseconds from them, I think that one of the great things about SPOJ is that it allows one to experiment without any kind of rank-wise reprisals. Besides, many test cases are not trivial to mimic (I can't think of a way right now to create a large, valid CMEXPR test case file, for example). What if a person makes some silly compilation/runtime error? Such mistakes happen, especially if it's 2 AM and one has been hacking for 6 hours straight to get PRIME1 to work with Python ;). And don't forget that a correct algorithm in a relatively slow language may not run fast enough without optimizations (again, like PRIME1), and it's difficult to say beforehand exactly whether a given solution will terminate in time or not. Finally, this would encourage multiple accounts from the same person (i.e., work through draft solutions on one account, then submit the best one from the other).

I'm speaking from an entirely egotistical perspective, of course, as it would hurt my ranking relative to the C people, because I code only in Python ;).

We've given the matter some thought, and this idea most certainly will not be implemented.

What we are proposing is assigning a weight to the problem itself (based on its difficulty, expressed e.g. as the inverse of the number of users who've solved it), rather than to a particular user's approach to the problem.

On the whole the scheme is resistant to dirty accounts. The only possible hazard seems to be a situation of this sort: persons A and B know how to solve problem X, person C does not. Persons B and C are neck'n'neck in the ranks. So, if person A wishes to help person C, he/she creates a load of new accounts and uses them to decrease the difficulty rating of problem X, thus breaking the tie in favor of person C.
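Put in numbers (all invented, using the 80/(40+n) weight), the hazard looks like this:

    def problem_value(solvers):
        return 80.0 / (40 + solvers)

    # C is one point ahead of B on everything except problem X,
    # which B has solved (10 solvers so far) and C has not.
    c_score = 101.0
    b_score = 100.0 + problem_value(10)    # 100 + 1.6 = 101.6 -> B leads

    # A solves X from 200 throwaway accounts, deflating its value:
    b_score = 100.0 + problem_value(210)   # 100 + 0.32 = 100.32 -> C leads
    print(b_score, c_score)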

Another suggestion that might be implemented one day is that only users who provide some fairly trustworthy form of identification (e.g. a postal address) will be included in the ranks.

Finally, if the site is to become a little more contest-oriented, we will have to draw a clear dividing line between the user's rank in the online judge, and the user's rank based on overall activity and successes in all official contests. In due course, once these ideas ripen a little and when we can provide clear, sensible formulae for ranks, they will be put under public discussion.

To clarify (I think I understand what you're talking about here, even if everyone else doesn't) - the "score" you get for solving a problem is a constant that's inversely proportional to the number of people (or accounts, I guess) that have solved it already. It has nothing to do with the number of times those people solved it, the number of languages they've solved it in, the number of failed submissions you or anyone else made on the problem, or anything else but the number of people who have solved it. Correct?

(Perfectly correct. -- Adrian)

1 month later

It would be good if, for each problem, a separate column displayed the number of points one can currently receive for it.
I think it is inconvenient not to know how many points solving a problem will earn you.

I agree. We will think of some place to squeeze it in.

The current formula for any classical problem p is: 80/(40+number of people who have solved p). At present, in order to see the number of points you will receive if you solve p as the next person, please use your mouse to highlight the number of people who've solved it so far (when viewing the problemlist).

3 months later

Hmm, maybe it should be done in another way? I think that the number of points one gets should depend on the position of one's solution in the given problem's ranklist - do you agree?

1 year later

In my opinion, the problem with the weighted system is what it's based upon: the quantity of answers instead of their quality.

For example, one may solve the second most solved problem in the system, FCTRL, with very crude code that takes a lot of time and resources but still manages to get accepted.

Another one may do it in O(1) after an extensive mathematical analysis.
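(For reference: FCTRL asks for the number of trailing zeros of N!. The analysis in question boils down to counting factors of 5, as in this sketch - roughly log5(N) steps per query rather than a loop over the whole factorial:)

    def trailing_zeros(n):
        # Trailing zeros of n! = number of factors of 5 in 1..n
        # (Legendre's formula); factors of 2 are always more plentiful.
        count, power = 0, 5
        while power <= n:
            count += n // power
            power *= 5
        return count

    print(trailing_zeros(60))  # 14 - 60! ends in fourteen zeros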

Who would you reward with the most points? And how much?

Even if speed and resource use aren't the best metrics for judging well-written code, perhaps the rating system could be based on how much better you fare versus the average score of everyone on that problem.

A second metric could be a user-rated problem system, where selected users, or top-ranked and active users, could vote on whether a problem is extremely difficult or trivial.

2 months later

I think that renrutal's proposed system (and indeed many similar suggestions) defeats the purpose of having so many languages available. Think for a minute: if we ranked only according to speed, nobody would use Python or Ruby, because the programs would take ages (relative to C) to run. Yet Python, Ruby, Perl, etc. all make excellent programming languages due to their ease of use, and the fact that a good (efficient) program in those languages will be far better than a bad algorithm in a fast language. If we moved to a speed competition, as opposed to "difficulty based on the number of successful solvers", we would see many of the Python/Ruby/Java/etc. programmers turned away, because they cannot realistically compete with C/Pascal/C++ programmers.

Then you could base the score on the language, too.

That way, the same problem can be rated trivial for Python programmers and very difficult for Prolog programmers, and when solved, the points could be awarded accordingly - more to Prolog than to Python.

Two C++ users would compete for speed and memory use with other C++ users only.
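No such per-language weighting exists on SPOJ, but a hypothetical sketch of the idea might look like this (solver counts invented):

    def per_language_value(solvers_in_language):
        # Per-language variant of the 80/(40+n) weight: the same problem
        # is worth more in languages in which few people have solved it.
        return 80.0 / (40 + solvers_in_language)

    solvers = {"Python": 500, "Prolog": 3}
    for lang, n in solvers.items():
        print(lang, round(per_language_value(n), 2))
    # Python 0.15 - commonly solved in Python, so worth little there
    # Prolog 1.86 - rarely solved in Prolog, so worth a lot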

9 months later

Why 80,40? Why not, say, 60,30 or 100,50? What do these numbers mean?

Absolutely nothing. These values simply lead to a distribution of points which seems to us to be "balanced". The maximum value of a problem is 2, the lower bound is 0, and the mean is about 1 point.
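(To check: a problem with zero solvers is worth 80/40 = 2 points; with exactly 40 solvers it is worth 80/80 = 1 point; and as the number of solvers n grows, 80/(40+n) tends to 0.)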