# Experience: Improving Opinion Spam Detection by Cumulative Relative Frequency Distribution

## Abstract

Over the last years, online reviews have become very important, since they can influence the purchase decisions of consumers and the reputation of businesses; therefore, the practice of writing fake reviews can have severe consequences for customers and service providers. Various approaches have been proposed for detecting opinion spam in online reviews, especially approaches based on supervised classifiers. In this contribution, we start from a set of effective features used for classifying opinion spam and we re-engineer them by considering the Cumulative Relative Frequency Distribution of each feature. Through an experimental evaluation carried out on real data from Yelp.com, we show that the use of the distributional features is able to improve the performance of classifiers.

## Introduction

As illustrated in a recent survey on the history of digital spam, the Social Web has led not only to a participatory, interactive nature of the Web experience, but also to the proliferation of new and widespread forms of spam, among which the most notorious are fake news and spam reviews, also known as opinion spam. This results in the diffusion of different kinds of disinformation and misinformation, where misinformation refers to inaccuracies that may originate even when acting in good faith, while disinformation is false information deliberately spread to deceive. Over the last years, online reviews have become very important, since they reflect customers' experience with a product or service and, nowadays, constitute the basis on which the reputation of an organization is built. Unfortunately, the confidence in such reviews is often misplaced, because spammers are tempted to write fake information in exchange for some reward, or to mislead consumers in order to obtain business advantages. The practice of writing false reviews is not only morally deplorable, being misleading for customers and harmful for service providers, but it is also punishable by law. Considering both the longevity and the spread of the phenomenon, scholars have for years investigated various approaches to opinion spam detection, mainly based on supervised or unsupervised learning algorithms. Further approaches are based on Multi-Criteria Decision Making. Machine learning approaches rely on input data to build a mathematical model in order to make predictions or decisions. To this aim, data are usually represented by a set of features, which are structured and ideally fully representative of the phenomenon being modeled. An effective feature engineering process, i.e., the process through which an analyst uses domain knowledge of the data under investigation to prepare appropriate features, is a critical and time-consuming task.
However, if done correctly, feature engineering increases the predictive power of algorithms by facilitating the machine learning process.


In this paper, we do not aim to contribute by defining novel features suitable for fake review detection; rather, starting from features that have been proven very effective in the literature, we re-engineer them by considering the distribution of the occurrences of the feature values in the dataset under analysis. In particular, we focus on the Cumulative Relative Frequency Distribution of a set of basic features already employed for the task of fake review detection. We compute this distribution for each feature and substitute each feature value with the corresponding value of the distribution. To demonstrate the effectiveness of the proposed approach, both the basic features and the distributional ones have been exploited to train several supervised machine-learning classifiers, and the obtained results have been compared. To the best of the authors' knowledge, this is the first time that the Cumulative Relative Frequency Distribution of a set of features has been considered for the unveiling of fake reviews. The experimental results show that the distributional features improve the performance of the classifiers, at the mere cost of a small computational surplus in the feature engineering phase. The rest of the paper is organized as follows. The next section revises related work in the area. We then describe the process of feature engineering and present the experimental setup, before reporting the results of the comparison among the classification algorithms. Moreover, we assess the importance of the distributional features and discuss the benefits brought by their adoption. Finally, the last section concludes the paper.
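The re-engineering step described above can be sketched as follows: each raw feature value is replaced by the fraction of samples in the dataset whose value is less than or equal to it, i.e., the empirical cumulative relative frequency. A minimal illustrative sketch in Python (the function name and sample data are ours; the paper does not specify its pipeline at this level of detail):

```python
from collections import Counter

def crfd_transform(values):
    """Replace each raw feature value with its cumulative relative
    frequency: the fraction of samples whose value is <= that value."""
    n = len(values)
    counts = Counter(values)
    # Cumulative relative frequency for each distinct value, ascending.
    cum, running = {}, 0
    for v in sorted(counts):
        running += counts[v]
        cum[v] = running / n
    return [cum[v] for v in values]

# Example: a hypothetical 'review count' feature over five reviewers.
raw = [1, 3, 3, 10, 1]
print(crfd_transform(raw))  # [0.4, 0.8, 0.8, 1.0, 0.4]
```

Note that the transform maps every feature to the same [0, 1] scale while preserving the ordering of the original values.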

## Related Work

Social Media represent the perfect means for everyone to spread content in the form of User-Generated Content (UGC), almost without any traditional form of trusted control. For years, Academia, Industry, and platform administrators have been striving to develop automatic solutions to raise users' awareness about the credibility of the news they read online. One of the contexts in which the problem of credibility assessment is receiving the most interest is spam - or fake - review detection. The existence of spam reviews has been known since the early 2000s, when e-commerce and e-advice sites began to be popular. In his seminal work, Liu lists three approaches to automatically identify opinion spam: the supervised, unsupervised, and group approaches. In a standard supervised approach, a ground truth of a priori known genuine and fake reviews is needed. Then, features about the labeled reviews, the reviewers, and the reviewed products are engineered. The first models built on such features achieved good results with common algorithms such as Naive Bayes and Support Vector Machines. As usual, a supervised approach is particularly challenging, since it requires the existence of labeled data, that is, in our scenario, a set of reviews with prior knowledge about their (un)trustworthiness. To overcome the frequent issue of lack of labeled data, in the very first phases of investigation in this field, the work by Jindal et al. exploited the fact that a common practice of fraudulent reviewers was to post almost duplicate reviews: reviews with similar texts were collected as fake instances. Linguistic features have also been proven valid for fake review detection, particularly in the early advent of this phenomenon. Indeed, pioneer fake reviewers exhibited precise stylistic features in their texts, such as a marked use of short terms and expressions of positive feelings.
Anomaly detection has also been widely employed in this field: an analysis of anomalous practices with respect to the average behavior of a genuine reviewer has led to good results. Anomalous behavior of a reviewer may be related to general and early rating deviation, as highlighted by Liu, or to temporal dynamics (see Xie et al.). Going further with the useful methodologies, human annotators, possibly recruited from crowd-sourcing services like Amazon Mechanical Turk, have also been employed, both 1) to manually label review sets to separate fake from non-fake reviews (e.g., see the very recent survey by Crawford et al.) and 2) to let them write intentionally false reviews, in order to test the accuracy of existing predictive models on such sets of ad hoc crafted reviews, as nicely reproduced by Ott et al. Recently, an interesting point of view has been offered by Cocarascu and Toni: deception is analysed based on contextual information derivable from review texts, not in a standard way, e.g., by considering linguistic features, but by evaluating the influence and interactions that one text has on the others. The new feature, based on bipolar argumentation on the same review, has been shown to outperform more traditional features when used in standard supervised classifiers, even on small datasets.


Supervised learning algorithms usually need diverse examples - and the values of diverse features derived from such examples - for an accurate training phase. Wang et al. investigated the 'cold-start' problem: the identification of a fake review when a new reviewer posts their first review. Without enough data about the stylistic features of the review and the behavioral characteristics of the reviewer, the authors first find similarities between the review text under investigation and other review texts. Then, they consider similar behavior between the reviewer under investigation and the reviewers who posted the identified reviews. A model based on neural networks proves effective in approaching the problem of lack of data in cold-start scenarios. Although many years have passed and, as we will see shortly, the problem has been addressed in many research works with different techniques, automatically detecting a false review is an issue not completely solved yet, as stated in the recent survey by Wu et al. This inspiring work examines the phenomenon not only by giving an overview of the various detection techniques used over time, but also by proposing twenty future research questions. Notably, to help scholars find suitable datasets for a supervised classification task, this survey lists the currently available review datasets and their characteristics. A similar work by Hussain et al., aimed at a comparison of different approaches, focuses on the performance obtained by different classification frameworks. Also, the authors carried out a relevance analysis of six different behavioral features of reviewers. Weighting the features with respect to their relevance, a classification over a baseline dataset obtains an 84.5% accuracy. A quite novel work considers the unveiling of malicious reviewers by exploiting the notion of 'neighborhood of suspiciousness'. Kaghazgaran et al. proposed a system called TwoFace that, starting from identifiable reviewers paid by crowd-sourcing platforms to write fake reviews on well-known e-commerce platforms, such as Amazon, studies the similarity between these and other reviewers, based, e.g., on the reviewed products, and shows how it is possible to spot organized fake review campaigns even when the reviewers alternate genuine and malicious behaviors. Serra et al. developed a supervised approach whose task is to differentiate among different kinds of reviewers, from fraudulent, to uninformative, to reliable. Leveraging a supervised classification approach based on a deep recurrent neural network, the system achieves notable performance over a real dataset with a priori knowledge of the fraudulent reviewers. The research works recalled so far rely on supervised learning. However, unsupervised techniques have been employed too, since they are very useful when no tagged data is available.

Fake reviewers' coordination can emerge by mining frequent behavioral patterns and ranking the most suspicious ones. A pioneering work first identifies groups of reviewers that reviewed the same set of products; then, the authors compute and aggregate an ensemble of anomaly scores (e.g., based on similarity among reviews and on the times at which the reviews have been posted): the scores are ultimately used to tag the reviewers as colluding or not. Another interesting approach for the analysis of colluding users checks whether a given group of accounts (e.g., reviewers) contains a subset of malicious accounts. The intuition behind this methodology is that the statistical distribution of reputation scores (e.g., number of friends and followers) of the accounts participating in a tampered computation significantly diverges from that of untampered ones. We close this section by referring back to the division made by Liu about supervised, unsupervised, and group approaches to spot fake reviewers and/or reviews. These are classification methods, mostly aiming at classifying information items in a binary or multiple way (i.e., credible vs non-credible), based on the evaluation of a series of credibility features extracted from the data. Notably, approaches based on some prior domain knowledge are promising in providing a ranking of the information item (i.e., in our scenario, of the review) with respect to credibility. This is the case of recent work by Pasi et al., which exploits a Multi-Criteria Decision Making approach to assess the credibility of a review. In this context, a given review, seen as an alternative among others, is evaluated with respect to some credibility criteria. An overall credibility estimate of the review is then obtained by means of a suitable model-driven approach based on aggregation operators.
This approach also has the advantage of assessing the contribution that single or interacting criteria/features have in the final ranking. The techniques presented above have their pros and cons, and, depending on the particular context, one approach can be preferred over another. The most relevant contribution of our proposal with respect to the state of the art is to improve the effectiveness of solutions based on supervised classifiers, which, as seen above, are a well-known and widely-used approach in this context.

## Feature Engineering

In this section, we introduce a subset of features that have been adopted in past work to detect opinion spam, and we propose how to modify them in order to improve the performance of classifiers. We emphasize that the listed features have been used effectively for this task by past researchers. Below, we give the rationale for their use in the context of unveiling fake reviews. Finally, it is worth noting that the list of selected features is not intended to be exhaustive.


### Basic Features

Following a supervised classification approach, the selection of the most appropriate features plays a crucial role, since they may considerably affect the performance of the machine learning models constructed from them. Features can be review-centric or reviewer-centric: the former refer to the review, while the latter refer to the reviewer. In the literature, several reviewer-centric features have been investigated, such as the maximum number of reviews, the percentage of positive reviews, the average review length, and the reviewer rating deviation. According to the outcomes of several works proposed in the context of opinion spam detection, we focused on reviewer-centric features, which have been demonstrated to be more effective for the identification of fake reviews. Thus, we relied on a set of eight features which have already been used proficiently in the literature for the detection of opinion spam in reviews. Specifically, we focused on the following reviewer-centric features:

- **Photo Count**: This metric measures the number of pictures uploaded by a reviewer and is directly retrieved from the reviewer profile. The effectiveness of using photo count, together with other non-verbal features, for detecting fake reviews has been demonstrated in past work.
- **Review Count**: It measures how many reviews a reviewer has posted on the platform. Spammers and non-spammers exhibit different behavior regarding the number of reviews they post. In particular, spammers usually post more reviews, since they may get paid. This feature has been investigated by several works.
- **Useful Votes**: The most popular online review platforms allow users to rank reviews as useful or not. This information can be retrieved from the reviewer profile, or computed by summing the total amount of useful votes received by a reviewer. This feature has already been exploited in the literature and has been demonstrated to be effective for opinion spam detection.
- **Reviewer Expertise**: Past research highlights that reviewers with acquired expertise on the platform are less prone to cheat.

Particularly, Mukherjee et al. report that opinion spammers are usually not longtime members of a site, whereas genuine reviewers use their accounts from time to time to post reviews. Although this experimental evidence does not mean that no spammer can be a member of a review platform for a long time, the literature has considered it useful to exploit the activity freshness of an account for cheating detection. The Reviewer Expertise has been defined by Zhang et al. as the number of days a reviewer has been a member of the platform (the original name was Membership Length).

- **Average Gap**: The review gap is the time elapsed between two consecutive reviews. This feature was introduced in the seminal work by Mukherjee et al. under the name *Activity Window*, and successfully re-adopted for detecting both colluders (i.e., spammers acting with a coordinated strategy) and singleton reviewers (i.e., reviewers with just isolated posting behavior). In the cited work, the Activity Window feature has been proven highly discriminative for demarcating spammers and non-spammers: fake reviewers are likely to review in short bursts and are usually not longtime active members.


On a Yelp dataset where the benign or malicious nature of reviewers was known a priori, previous work proved that, by computing the difference between the timestamps of the last and first reviews of each reviewer, a majority (80%) of spammers were bounded by 2 months of activity, whereas the same percentage of non-spammers remained active for at least 10 months. We define the Average Gap feature as the average time, in days, elapsed between two consecutive reviews by the same reviewer:

$$AG_i = \frac{1}{N_i - 1} \sum_{j=2}^{N_i} (T_{i,j} - T_{i,j-1})$$

where $AG_i$ is the Average Gap for the $i$ -th user, $N_i$ is the number of reviews written by the user, and $T_{i,j}$ is the timestamp of the $j$ -th review of the $i$ -th user.
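The Average Gap formula can be computed directly from a reviewer's sorted review dates; a minimal sketch (function name and dates are illustrative, not taken from the paper's code):

```python
from datetime import date

def average_gap(timestamps):
    """Average time, in days, between consecutive reviews of one reviewer.
    `timestamps` must be sorted in posting order; needs at least 2 reviews."""
    gaps = [(b - a).days for a, b in zip(timestamps, timestamps[1:])]
    return sum(gaps) / len(gaps)

# Hypothetical reviewer with three reviews.
ts = [date(2020, 1, 1), date(2020, 1, 11), date(2020, 1, 31)]
print(average_gap(ts))  # (10 + 20) / 2 = 15.0
```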

- **Average Rating Deviation**: The rating deviation measures how far a reviewer's rating is from the average rating of a business. Previous work observed that spammers are more prone to deviate from the average rating than genuine reviewers. However, a bad experience may also induce a genuine reviewer to deviate from the mean rating. The Average Rating Deviation is defined as follows:

$$ARD_i = \frac{1}{N_i} \sum_{j=1}^{N_i} \left| R_{i,j} - R_{B(j)} \right|$$

where $ARD_i$ is the Average Rating Deviation of the $i$ -th user, $N_i$ is the number of reviews written by the user, $R_{i,j}$ is the rating given by the $i$ -th user in her/his $j$ -th review, corresponding to the business $B(j)$ , and $R_{B(j)}$ is the average rating obtained by the business $B(j)$ .
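The definition above amounts to a mean absolute deviation; a minimal sketch (function name and sample ratings are illustrative):

```python
def average_rating_deviation(ratings, business_avgs):
    """Mean absolute deviation between each of a reviewer's ratings and
    the average rating of the corresponding business."""
    assert len(ratings) == len(business_avgs)
    n = len(ratings)
    return sum(abs(r - avg) for r, avg in zip(ratings, business_avgs)) / n

# Hypothetical reviewer: three ratings vs. the businesses' averages.
print(average_rating_deviation([5, 1, 4], [3.0, 4.0, 4.0]))  # (2 + 3 + 0) / 3
```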

- **First Review**: Spammers are usually paid to write reviews when a new product is placed on the market. This is because early reviews have a great impact on consumers' opinions and, in turn, on sales, as pointed out in the literature. We compute the time elapsed between each review of a reviewer and the first review posted for the same business. Then, we average the results over all the reviews. Specifically, the First Review value for reviewer $i$ is given by:

$$FR_i = \frac{1}{N_i} \sum_{j=1}^{N_i} (T_{i,j} - F_{B(j)})$$

where $FR_i$ is the First Review value of the $i$ -th user, $N_i$ is the number of reviews written by the user, $T_{i,j}$ is the time at which the $i$ -th user wrote the $j$ -th review, and $F_{B(j)}$ is the time at which the first review of the same business $B(j)$ , i.e., the business corresponding to the $j$ -th review, was posted.
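A minimal sketch of this computation, pairing each of the reviewer's review dates with the date of the earliest review of the corresponding business (names and dates are illustrative):

```python
from datetime import date

def first_review_feature(review_times, first_times):
    """Average delay, in days, between a reviewer's reviews and the first
    review ever posted for the same business."""
    n = len(review_times)
    return sum((t - f).days for t, f in zip(review_times, first_times)) / n

# Hypothetical reviewer with two reviews; each paired with the date of the
# earliest review posted for that business.
mine = [date(2020, 3, 5), date(2020, 6, 1)]
first = [date(2020, 3, 1), date(2020, 5, 22)]
print(first_review_feature(mine, first))  # (4 + 10) / 2 = 7.0
```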

- **Reviewer Activity**: Several works pointed out that the more active a user is on the online platform, the more likely the user is genuine, in terms of contributing useful knowledge sharing. The usefulness of this feature was demonstrated several years ago. Since the early 2000s, surveys have been conducted on large communities of individuals, trying to understand what drives them to be active and useful on an online social platform in terms of sharing content. Results showed that people contribute their knowledge when they perceive that it enhances their reputation, when they have experience to share, and when they are structurally embedded in the network. The Activity feature expresses the number of days a user has been active and is computed as:
$$A_i = T_{i,L} - T_{i,0}$$
where $A_i$ is the activity (expressed in days) of the $i$ -th user, $T_{i,L}$ is the time of the last review of the $i$ -th user and $T_{i,0}$ is the time of the first review of the $i$ -th user.
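This last feature is a single date difference; a minimal sketch (function name and dates are illustrative):

```python
from datetime import date

def activity_days(first_review, last_review):
    """Number of days a reviewer has been active: the time elapsed
    between the reviewer's first and last review."""
    return (last_review - first_review).days

print(activity_days(date(2019, 1, 1), date(2020, 1, 1)))  # 365
```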


## Conclusions

User opinions are an important information source that can help customers and vendors evaluate the pros and cons of a transaction. Given the importance of the role of opinions, there is the possibility that unfair opinions are used to promote one's own products or to disparage the products of competitors. The important challenge of detecting unfair opinions has long attracted the scientific community, and one of the most promising approaches to address this problem is based on the use of supervised classifiers, which have been proven to be highly effective. In this paper, we tried to further improve their effectiveness, not by proposing changes to the well-tested state-of-the-art algorithms, but only by modifying the input used in the training phase to construct supervised classifiers. Specifically, we considered eight features widely used to detect opinion spam and pre-processed them by considering the cumulative relative frequency distribution. To demonstrate the effectiveness of our proposal, we extracted a dataset from Yelp.com and measured the performance of the six most used classifiers in detecting opinion spam, both in their standard use and when our proposal is adopted. The results of this comparison show that the use of the cumulative relative frequency distribution improves the performance of state-of-the-art classifiers. As future work, we intend to extend our proposal to detect not only individual spammers, but also groups of users who, acting in a coordinated and synchronized way, aim to give credit to or discredit a product (or a service). The idea is that, once an ensemble of malicious reviewers is detected, an overlap between the products that the malicious reviewers have evaluated is searched for. Groups of users with large overlap (i.e., who reviewed the same products) could be colluders.