Introduction and evaluation of the peer evaluation tool
The aim of this paper is to introduce an automatic peer evaluation tool for a web-based examination and peer evaluation system. Peer evaluations on their own do not yield sufficiently reliable grading, because the students performing them are at different levels. The automated grading process addresses this problem with an imperative algorithm that guides the grading procedure. The algorithm calculates a credibility factor for each student, based on the grades given by other students in the peer evaluation. The algorithm then 'corrects' the given grades by giving more weight to grades awarded by students with high credibility values.

Earlier research, reported last year, introduced a web-based examination application with a peer evaluation tool. The tool departed from traditional exams by providing time and place independence, and it also offered an efficient peer evaluation facility. This tool was used to test the automated grading algorithm: the grades created manually in the web-based exam were compared to those calculated by the algorithm. The results correlated quite well, although some anomalies were found when the system was evaluated. It is currently possible to isolate these problematic answers and continue grading them manually. When the problematic answers were removed, the results given by the automatic grading algorithm were very good.
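To make the weighting idea concrete, the following is a minimal sketch of a credibility-weighted grading scheme. It is a hypothetical illustration, not the paper's actual algorithm: the function name, the data layout, and the specific credibility formula (inverse of the mean deviation from the credibility-weighted consensus) are all assumptions introduced here.

```python
def credibility_weighted_grades(grades, iterations=10):
    """grades[evaluator][answer] = grade the evaluator gave that answer.

    Returns (consensus, credibility): a corrected grade per answer and a
    credibility factor per evaluator. Purely illustrative.
    """
    answers = sorted({a for given in grades.values() for a in given})
    # Start with equal credibility for every evaluator.
    credibility = {e: 1.0 for e in grades}
    consensus = {}
    for _ in range(iterations):
        # Corrected grade: credibility-weighted mean of the peer grades.
        for a in answers:
            num = sum(credibility[e] * g[a] for e, g in grades.items() if a in g)
            den = sum(credibility[e] for e, g in grades.items() if a in g)
            consensus[a] = num / den
        # Credibility: evaluators who deviate far from the consensus
        # lose weight; those who agree with it gain weight.
        for e, g in grades.items():
            mean_dev = sum(abs(g[a] - consensus[a]) for a in g) / len(g)
            credibility[e] = 1.0 / (1.0 + mean_dev)
    return consensus, credibility
```

For example, if two evaluators grade consistently and a third is an outlier, the outlier's credibility factor drops over the iterations and the corrected grades move toward the consistent pair's grades.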