Safeguarding Buyers with Attack-Resilient Reputation Parameters

Agent-mediated reputation systems provide the decision-making support needed to automate processes and to improve the quality of services in an information-asymmetric e-market. Reputation systems encourage honest behavior and discourage malicious behavior of buyer and seller agents by safeguarding honest participants and by laying a foundation for security through social control mechanisms. In this paper, we propose a reputation system that aims to safeguard honest buyers from rogue behavior by counterparties and deceptive buying peers. It employs reputation functions with attack-resilient parameters based on e-market parameters like the value of the transaction and the varying experience of agents. These parameters help to reduce the incentive for dishonest behavior and to minimize harm in case of attacks by rogue sellers, thereby improving trustworthiness among participants in the e-market.


Introduction
The widespread use of the Internet is accompanied by concerns of trust and reputation, especially for e-commerce transactions involving monetary exchange [31]. The use of agent technologies in reputation systems makes e-commerce more user-friendly and relatively secure, as it facilitates accurate assessment of buyer and seller behavior.
Reputation has been expressed as "a perception that an agent has of another's intentions and norms" [16] or "the opinions that nodes in the system have about their peers" [18]. Reputation and trust complement each other, as an agent expects a positive outcome when interacting with another agent that has a reputation for being trustworthy [7]. Trust involves looking forward in time by expecting that another agent will do what it says it will, whereas reputation involves looking backward in time by assessing whether an agent behaved as per expectations.
Reputation systems help buyers and sellers decide whether to participate in a transaction. In the e-market, sellers generally enjoy an information bias over buyers: buyers have no access to the physical goods before final payment and must rely on the seller's product description to make buying decisions. Hence, it is the buyer who mainly needs protection mechanisms that ensure honest behavior from sellers. A seller, on the other hand, can benefit from this situation by providing a good of lower quality, or simply by not providing any good at all; by frequently changing its identity to disown a bad reputation; by colluding with others to maliciously manipulate its own or others' reputation; or by providing unfair information to others. In addition, the absence of any requirement of geographical proximity between counterparties, the difficulty of identifying the actual person behind a transaction, and the very limited shield of law enforcement (owing to multiple jurisdictions and the prohibitive cost of investigation and prosecution) make it extremely difficult to penalize a cheater in the e-market in the traditional way. In such an environment, reputation systems aim to reduce the risk of engaging with potentially untrustworthy participants by providing a mechanism for establishing trustworthiness between mutually unknown online entities. This encourages honest behavior and discourages rogue behavior. In the absence of reputation systems, there would be no explicit mechanism to distinguish honest from fraudulent sellers; under such high uncertainty, fraudulent sellers would flourish by offering cheap products at discounted rates. However, as all participants in the e-market are self-interested with no hesitation to con others, the success of a reputation system depends on its capability to protect honest participants. While existing reputation systems aim to promote honest behavior, their passive approaches are not enough to protect buyers from rogue traders [1]. If reputation systems expect fraudulent behavior and employ reputation computing strategies that are resilient to such behavior, then online buying and selling could become much safer. The proposed reputation system is situated in a single-cycle, sealed-bid reverse auction, distributed e-market environment with self-interested buyers and sellers. The reverse auction involves rigorous work by the buyer, starting with a Request for Quote (RFQ) describing the buyer's requirements. In turn, multiple sellers join the bidding, and a virtual auction is established where sellers bid against one another over the Internet to sell the product [5]. In sealed-bid auctions, bidders cannot see competitors' offers. On receiving product offers, the buyer agent selects the seller offering the product with the highest expected value. Each transaction culminates with the buyer and seller agents updating the reputation of each other.
An attack on a reputation system can be defined as a sequence of activities with a desired outcome to exploit vulnerabilities in the reputation design [28]. In the e-market, the design of reputation functions must incorporate attack-resilient factors that enable a distinction between honest and fraudulent behavior, so as to reward honest parties and punish rogue ones. If not identified and penalized, cheaters may not only harm honest members by reducing their trustworthiness but may also lead to failure of the e-market. This paper proposes a reputation computing methodology that is resilient to common attacks, as it is sensitive to parameters like the value of the transaction, the penalty for dishonest behavior, and the mutual experience of a buyer-seller pair. A robust reputation system should also identify the context in which one has earned trustworthiness, for example, as a counterparty or as an advisor. In the proposed system, in addition to maintaining the reputation of sellers based on its own experience and opinions from other buying peers, a buyer also assigns reputation to opinion-providing buying peers based on whether their opinions were fair or unfair.
A number of works from the literature [3], [25], [34] use the concept of penalty and reputation/dis-reputation thresholds, but these are subjective and specific to a particular participant. In contrast, we set the seller's reputation and dis-reputation thresholds dynamically based on parameters like the buyer's risk-taking ability, its past experience, and the value of the transaction. This follows the convention that in a high-value transaction and an untrustworthy environment, a buyer needs greater protection, so the reputation and dis-reputation thresholds should be set to higher values than in a scenario with a low-value transaction and a trustworthy environment. The penalty for dishonest behavior in our system has two components: a fixed minimum penalty, and a flexible component that changes with the value of the transaction. This underlines that rogue behavior in a large transaction is more harmful and must invoke a higher penalty to discourage fraudulent sellers. The proposed reputation system promotes honest behavior and discourages dishonest behavior by computing reputation in a way that reduces the benefit to rogue sellers and also minimizes the impact of many attacks.

Related Work
To alleviate the trustworthiness crisis in online purchasing, several approaches have been proposed. A number of online trading platforms comprise reputation systems to encourage honest transactions between buyers and sellers through feedback provision [6]. However, existing reputation systems fall short of dealing with individual and collusive behavior of rogue sellers who seek to increase their own ratings or to reduce others' reputation [12].
PeerTrust [20] uses a trust metric that takes into account the satisfaction of the source peer and the credibility of the feedback source, based on the number of transactions of a peer with other peers. It collects runtime feedback from peers that may be honest or dishonest, but it does not make a clear distinction between feedback from different sources. It computes feedback credibility endogenously, and is prone to errors if the majority of feedback is collusive. In contrast, our system survives majority unfair ratings, as a buyer filters out dishonest opinions based on their deviation from its own experience.
In Yu and Singh [34], an agent computes the trustworthiness of others based on its direct interactions and others' testimony. To track deception in reputation sharing, it uses the deviation among advisors' opinions to filter out unfair advice.
A probabilistic trust model for handling inaccurate reputation sources [19] computes the trust of an agent using the past experience of interacting agents and, in the absence of such experience, third-party information. To discard unfair opinions, it estimates the accuracy of the current advice based on past experience and adjusts the advice accordingly. However, it assumes that sellers act consistently, which may not always be the case. It also needs to go over an advisor's past advice each time to estimate its accuracy.
The Beta Reputation System [9] combines the seller ratings received from advisors using the numbers of fair and unfair ratings. To handle unfair opinions, it uses an Iterated Filtering approach to discard ratings that are not in the majority; hence, it is effective only when the majority of ratings are fair. Some reputation systems [23], [28] use only the self-experience of members. In Tran [28], sellers adjust the goods' price and quality to maximize profit. Roozmand et al. [23] compute reputation using the quality, price and delivery time of goods. These systems suffer from Re-entry and Sybil attacks, as their new sellers do not start from the least reputation.
Huynh, Jennings and Shadbolt [8] compute participants' trust based on direct experience, witness information, role-based rules and third-party referrals. Their model uses an inaccuracy tolerance threshold to specify the maximal allowed gap between the actual performance and the witness rating, and penalizes unfair witnesses accordingly.
Hazard and Singh [6] state that reputation systems must have the Monotonicity and Accuracy properties. Our system follows Monotonicity, as seller selection and reputation computing are based on attributes like low price and high quality, and a rogue seller is penalized quickly to keep the reputation estimate accurate. The authors of [25] provide a comprehensive survey of security issues in reputation systems for peer-to-peer (P2P) networks. They categorize attacks as network-related and peer-related; among peer-related attacks, they identify Whitewashing, Sybil, and collusive attacks as common, and discuss the vulnerability of some reputation systems [32], [36] to these attacks. They also discuss a few research issues and challenges common to reputation systems in e-commerce, like right peer selection and reliable exchange of reputation. Marmol and Perez [18] analyze the applicability of a number of attacks (like Malicious Spies and Sybil attacks) to trust and reputation systems for distributed environments, and discuss how these attacks can be handled.

Kussul, Kussul and Skakun [24] describe a number of attacks on the security of utility-based reputation systems in grids, based on the security scenarios initially defined by Marmol and Perez [10]. To tackle attacks, they incorporate a statistical model of user behavior besides several other components, like assigning initial reputation to a new participant, capturing coalitions between a buyer and a resource, and determining the time-decay and score functions.
Koutrouli and Tsalgatidou [9] provide a thorough view of the various threats in distributed reputation systems. They classify attacks and their corresponding defense mechanisms, identify conflicts between different defense mechanisms and desirable characteristics of reputation systems, and give a roadmap for designing robust reputation systems for P2P applications by identifying areas that require further research.
The reputation system proposed in this paper incorporates a number of attack-resilient reputation parameters in the design of its reputation functions to enhance its capability to defend against a number of common attacks.

Background
With the popularity of e-commerce, the frequency of online fraud is also on the rise. Such an environment requires a proactive approach to safeguard honest participants from fraudsters who disguise themselves as honest parties [36]. Reputation systems reduce the vulnerability of honest participants by mitigating information asymmetry and transaction risks, encouraging honest and discouraging fraudulent behavior. This section highlights the role of reputation systems as social behavior control mechanisms and identifies different unacceptable social behaviors by self-interested participants in the form of various attacks.

Reputation Systems as Soft Security Mechanisms
Hard security and soft security are two different but interdependent approaches to security in information systems. The term hard security is used for techniques like authentication, authorization, integrity and confidentiality that aim to prevent intruders from entering the system, whereas soft security refers to techniques that protect against valid but rogue members of an environment through social behavior control mechanisms like reputation systems [14]. Satisfactory detection and removal of vulnerabilities in the e-market require research in both approaches. As shown in Figure 1, these approaches can be visualized as two layers, where trust due to soft security mechanisms like reputation systems cannot exist without trust due to hard security mechanisms. In environments with autonomous and self-interested participants, ensuring honest behavior is a challenging task. Reputation systems are based on common ethical norms that define a broad boundary of acceptable social behavior. These systems act as a deterrent for rogue behavior by discouraging cheaters and minimizing the impact of attacks. The presence of vulnerabilities in the reputation design may result in an increase of attacks. The growth of attack-resilient reputation systems is therefore crucial for the e-market; otherwise cheaters would flourish, leading to its failure.

Common Attacks on Reputation Systems
In this section, we discuss some common attacks on reputation systems. In a Ballot Stuffing (BS) attack, a group of agents collude to rate another agent with abnormally high ratings to boost its reputation; in Badmouthing (BM), a group of agents collude to rate an agent with abnormally low ratings.

In a Re-Entry (REN) attack, a low-rated agent exits the market and re-enters with a new identity to disown its bad reputation. It is assisted by the availability of cheap pseudonyms in the online environment. Reciprocity (REC) is an attack in which two agents mutually rate each other with abnormally high ratings, whereas in Retaliation (RET) both agents rate each other with abnormally low ratings.

In a Value-Imbalance (VIM) attack, a rogue seller behaves honestly in small-value transactions to gain reputation but cheats in a large-value transaction. This attack thrives in systems where the amount of change in reputation for honest or dishonest behavior is not influenced by the price of the product(s) involved in that transaction.

In a Multiple-Identity (MI) or Sybil attack, a seller opens multiple accounts, thereby increasing its chance to sell a good. It continues selling goods honestly through some accounts and dishonestly through others without facing any major penalty, exiting from a low-reputation account and opening another.

In an Oscillations (OC) attack, rogue sellers supply bad products in some scenarios while behaving honestly in others. Handling OC is complex, as resilience to it depends on the behavioral patterns of the parties.

The E-Market Model
In the dynamic and potentially adversarial e-market environment, reputation systems seek to generate an accurate assessment of participants' behavior [18]. If the reputation design is susceptible to deceitful behavior from rogue participants, it can itself become a point of failure for the e-market due to its inability to accurately separate honest and dishonest participants. A reputation system that quickly adapts to changes in the e-market would be highly useful in protecting honest parties from cheaters, as it could model their behavior with greater accuracy.
The proposed e-market environment is: open, as agents can join or leave the e-market at any time; dynamic, as it undergoes constant changes; uncertain, as the true worth of a good can be judged only after its purchase; untrusted, as the e-market is populated with both honest and dishonest agents; and distributed, as collection, computation and propagation of ratings is done by the respective participants.
Major constituents of the e-market are buyer agents, seller agents, the seller agent's comprehension of the buyer's requirements, the ability of an agent to track others' behavior, and the buyer-seller interaction cycle. To compute reputation we use Reinforcement Learning (RL), a machine learning technique that deals with what an agent should do in each state, mapping situations to actions so as to maximize a scalar reward signal over the long term. In RL, the learner must discover the actions that yield the most reward by trying them. At times, actions may affect not only the immediate reward but also the next situation and all subsequent rewards. Trial-and-error search and delayed reward are the most important distinctive features of RL. A few well-known techniques for solving the RL problem are Dynamic Programming (DP), Monte Carlo (MC), and Temporal-Difference (TD) learning [33]. The proposed reputation system uses TD learning, which combines ideas from MC and DP and is one of the central approaches of RL. Like MC, TD methods can learn directly from experience without a model of the environment. However, unlike MC, in which one must wait until the end of an episode to update the value function, TD algorithms only need to wait until the next time step.
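The TD-style update described above can be sketched as follows; the reward encoding in [-1, 1], the step size, and the clamping are illustrative assumptions rather than the paper's exact formulation.

```python
# TD(0)-style reputation update: move the current estimate a small
# step toward each observed transaction outcome. The step size and
# the reward encoding (+1 honest, -1 dishonest) are assumptions.

def td_update(reputation: float, reward: float, alpha: float = 0.1) -> float:
    """Nudge the reputation estimate toward the observed reward."""
    new_rep = reputation + alpha * (reward - reputation)
    return max(-1.0, min(1.0, new_rep))  # keep the estimate bounded

rep = 0.0
for outcome in (1.0, 1.0, -1.0, 1.0):  # honest, honest, dishonest, honest
    rep = td_update(rep, outcome)
```

Unlike a Monte Carlo scheme, the estimate is refreshed after every transaction rather than at the end of an episode, which matches the per-transaction reputation updates used here.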
Different processes belonging to a transaction in the e-market, like computing and sharing sellers' reputation and seller selection [35] for purchasing a product, are sensitive to dynamic e-market parameters like the mutual experience of buyer/seller agents and the value of the transaction. To utilize the buyer's improved knowledge of a seller's behavior, with each successive mutual transaction the relative weight of the seller's reputation based on the buyer's self-experience increases compared to the advice of other buyer agents. Incorporating the value of the transaction in the reputation function follows the principle that honest behavior in a big transaction is more significant than in a small one. In a robust reputation system, the reputation of a rogue participant should drop quickly, while it should be hard for a participant to boost its reputation in a short period of time [22]. Hence, the penalty for fraudulent behavior in the proposed reputation system is kept greater than the reward, to ensure that in case of dishonest behavior by a seller, its reputation drops swiftly. Figure 2 shows the conceptual architecture of this environment.
In the e-market, agents automate the following e-commerce processes: searching, advising on, comparing and selecting products and sellers; purchasing products on behalf of actual buyers; tracking the behavior of other participants; filtering out unfair information; updating the reputation of counterparties based on self-experience and others' opinions; and updating their peers' reputation based on their behavior during a transaction. In this paper, we focus on how the robustness of the reputation system is enhanced by using attack-resilient reputation parameters.

Reputation Computing Methodology
The methodology for computing reputation establishes the basic traits of a reputation system and may strengthen or weaken its resilience against various attacks. In this section, we present an attack-resilient reputation computing methodology to promote honest and discourage dishonest behavior. In the proposed environment, important milestones in a transaction comprise the buyer's fuzzy product requirement specification, soliciting offers from sellers, seller selection, updating individual reputation, soliciting opinions from peers, computing shared reputation, and combining individual and shared reputation to form the overall reputation of a seller. In the proposed reputation system, Individual Reputation (IR) reflects a seller's trustworthiness based on the buyer's self-experience, and Shared Reputation (SR) reflects the opinions of other buyers about the reputation of a seller. The sequence diagram in Figure 3 represents the ordering of processes in a transaction. Each buyer profiles sellers as reputed, non-reputed, dis-reputed or new with the help of its reputation and dis-reputation thresholds. A seller that is new to buyer b starts in the list of new sellers and remains in this list until its reputation moves above the dis-reputation threshold; if a seller cheats before crossing this threshold, it is termed dis-reputed. Seller profiling helps a buyer decide with whom to interact: reputed sellers enjoy high trustworthiness from the buyer, dis-reputed sellers are blacklisted, and non-reputed sellers are neither most preferred nor blacklisted. Similar to sellers, buyers acting as advisors are also categorized as reputed, non-reputed, dis-reputed and new.
Handling e-commerce transactions is sometimes difficult, as the buyer's assessments of a good's attributes are often subjective, fuzzy, conflicting and non-commensurable in nature. The proposed system allows buyers to purchase a product by specifying their product requirements in terms of the importance of different attributes in linguistic terms like Highly Important or Moderately Important. To select a seller, it uses a seller selection methodology [35] in which the buyer agent first computes fuzzy weights of the different attributes of a good as per the buyer's requirements and past experience. It solicits product offers by forwarding the buyer's fuzzy product request to existing sellers. On receiving the offers, the buyer agent computes the expected value of the good by multiplying the fuzzy matrix containing the sellers' product offers with the fuzzy weights of the different attributes, and purchases the good from the seller that offered the product with the highest expected value. After the product is delivered, the buyer agent solicits the buyer's feedback on the different product attributes and computes the actual value of the good using the same seller selection methodology. If the actual value of the good is greater than its expected value, reflecting that the buyer is satisfied with the product, the buyer agent enhances the reputation of the seller; otherwise it reduces the reputation of the seller, as discussed below.
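A crisp (non-fuzzy) sketch of the expected-value computation is shown below; the attribute names, scores, and crisp weights are invented stand-ins for the paper's fuzzy weights and offer matrix.

```python
# Seller selection by expected value: score each seller's offer
# against the buyer's attribute weights. Crisp weights stand in
# for the fuzzy weights of the actual methodology (an assumption).

weights = {"quality": 0.5, "price": 0.3, "delivery": 0.2}

offers = {
    "seller_a": {"quality": 0.9, "price": 0.6, "delivery": 0.7},
    "seller_b": {"quality": 0.7, "price": 0.9, "delivery": 0.8},
}

def expected_value(offer: dict) -> float:
    """Weighted sum of attribute scores, mirroring the product of the
    offer matrix and the attribute weights in the selection method."""
    return sum(weights[a] * offer[a] for a in weights)

best = max(offers, key=lambda s: expected_value(offers[s]))
```

After delivery, the same computation applied to the buyer's feedback scores yields the actual value of the good, and comparing actual against expected value drives the reputation update that follows.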

Computing Individual Reputation (IR)
Individual reputation of the seller is based on the buyer's experience during a transaction. If buyer b ∈ B is satisfied with the transaction, using reinforcement learning it enhances the IR of seller s at time t+1 using Equation (1). Here, μ represents the effective reputation value increase factor, η represents the relative value factor that ensures IR increases monotonically with the value of the transaction, and the overall reputation of seller s for buyer b at time t forms the base of the update. The value of the transaction equals the price p of the good g when a single good is purchased, and η maps this value to the range [0,1]. Also, λ is a constant in the range [0,1] that is used to moderate the sharp change in the value of the exponential function, and e is the exponential constant. In Equation (2), the initial reputation discount factor has a fixed assumed value of 1, ε is the mutual trustworthiness discount rate with an initial value of 0 that increments by a small fraction with each successive transaction between a buyer-seller pair, and the number of past transactions between that pair discounts the update. This ensures that as the number of transactions between a buyer-seller pair increases, IR increases at a relatively smaller rate, reducing the impact of any collusion between the buyer and seller to inflate each other's reputation. If the environment is highly uncertain, the initial reputation discount factor can also be set to a value higher than 1. It can be observed from Equation (1) that IR at time t+1 is based on the overall reputation at time t, underlining that the overall reputation denoting the trustworthiness of seller s for buyer b at the end of the last transaction forms a part of the self-experience of buyer b in the next transaction. As η may quickly approach 1 for large transaction values, before computing η we first scale the value of the transaction down to a reasonable range using Equation (4), which maps the original range of transaction values to a new, narrower range. In the rest of the paper, the value of the transaction means the scaled value obtained from Equation (4).

If buyer b is not satisfied with the purchased good g, using reinforcement learning, buyer b reduces the IR of seller s at time t+1 using Equations (5)-(7), where ξ is the effective reputation value decrease factor for dishonest behavior of a seller. Like μ, ξ also depends on the value of the transaction and the number of past transactions between a buyer-seller pair. Hence, there is a swifter drop in reputation for a big transaction than for a small one, and this drop is discounted with an increase in the number of past transactions between a buyer-seller pair. In Equation (7), the Penalty Factor combines a fixed component reflecting the minimum penalty for dishonest behavior with a flexible component that grows with the value of the transaction.
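Since Equations (1)-(7) are not reproduced here, the sketch below illustrates one functional form consistent with their description: an exponential value factor, an experience-based discount, and a penalty combining a fixed minimum with a value-dependent part. All constants and the exact forms are assumptions, not the paper's actual equations.

```python
import math

# Illustrative IR update. The forms of eta, the experience discount,
# and the penalty split are assumptions consistent with the text.

LAM = 0.001         # lambda: moderates the exponential value factor
EPS = 0.01          # epsilon: mutual trustworthiness discount rate
MIN_PENALTY = 0.05  # fixed minimum penalty component

def value_factor(v: float) -> float:
    """eta in [0, 1): grows monotonically with transaction value v."""
    return 1.0 - math.exp(-LAM * v)

def ir_update(overall_rep: float, v: float, n_past: int, honest: bool) -> float:
    """Raise IR after an honest transaction, lower it after a dishonest
    one. The step shrinks as past mutual transactions accumulate, and
    the penalty exceeds the reward for the same transaction value."""
    discount = 1.0 / (1.0 + EPS * n_past)
    eta = value_factor(v)
    if honest:
        new_rep = overall_rep + discount * eta * (1.0 - overall_rep)
    else:
        penalty = MIN_PENALTY + discount * eta * (1.0 + overall_rep)
        new_rep = overall_rep - penalty
    return max(-1.0, min(1.0, new_rep))
```

With this shape, cheating in a transaction costs more reputation than the seller would have gained by behaving honestly in it, which is the property the value-of-transaction and penalty parameters are meant to enforce.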

Computing Shared Reputation (SR)
In a distributed environment, while sharing sellers' reputation, the requesting buyer may receive unfair ratings from its peers. To handle this problem, we propose a reputation sharing methodology with two major constituents: filtering unfair advice, and allocating reputation to advisors based on their behavior. In the e-market, buyers may not always be willing to share their opinions, so there should be an incentive for advisors who always provide opinions on request and are also honest. Our strategy gives incentives to honest advisors from two angles. First, it assigns reputation to advisors exclusively for participating in the reputation sharing process. Second, it enhances the weight of each successive fair opinion by an advisor as a reward for showing honesty in a self-interested environment. The proposed methodology filters other buyers' advice by using the concept of the "moment about an arbitrary point" [30]: it compares the set of received opinions about the target seller's reputation with the IR of the target seller for the opinion-seeking buyer. The second moment is widely used to measure the width of a set of points in one dimension or, in higher dimensions, the shape of a cloud of points [21]. Instead of finding the deviation from the set of received opinions, this methodology computes the deviation of shared opinions from the seller's IR, which reflects the buyer's self-experience of the seller. Filtering out unfair opinions based on their deviation from the requesting buyer's self-experience helps to discard them even if the majority of opinions are collusive in nature.
Let b be the buyer soliciting the reputation of seller s from its buying peers as advisors at time t+1, accepting opinions denoting the reputation of seller s from advisors b_i for i = 1, 2, ..., n. It discards any opinions from dis-reputed advisors. To compute the maximum permitted deviation of received opinions from its own past experience of the seller, buyer b computes the deviation of the set of received opinions from the IR of seller s using Equation (8). If the buyer and seller are interacting for the first time, it instead finds the deviation of the advisors' opinions from their mean using Equations (9) and (10). Opinions whose deviation lies within the permitted limit are denoted as fair/honest, and the others as unfair/dishonest; the unfair opinions are filtered out from the set of received opinions.
From the set of honest opinions, buyer b computes the weight of each honest opinion by utilizing advisors' past behavior, based on parameters like the number and percentage of honest past transactions between the pair. Finally, the buyer computes the aggregated shared reputation of seller s as the weighted combination of the honest opinions given in Equation (11). Buyer b also updates the reputation of each honest/fair advisor b_i using Equation (12), in which the advisors' reputation increase factor and the value of the transaction make the increase in advisors' reputation monotonic in the transaction value. If an advisor's opinion is dishonest, buyer b reduces its reputation through a corresponding decrease.
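The filtering idea can be sketched as follows: opinions deviating too far from the buyer's own IR of the seller are discarded before aggregation, so even a collusive majority cannot dominate. The fixed deviation bound and the plain average are simplifying assumptions (the paper weights honest opinions by advisor behavior).

```python
# Filter shared opinions by their deviation from the buyer's own IR
# of the seller, then aggregate the survivors. The fixed deviation
# bound and the unweighted average are simplifying assumptions.

def shared_reputation(own_ir: float, opinions: list, max_dev: float = 0.3):
    """Return (aggregated SR, list of fair opinions)."""
    fair = [o for o in opinions if abs(o - own_ir) <= max_dev]
    if not fair:
        return own_ir, fair  # no fair advice: fall back on self-experience
    return sum(fair) / len(fair), fair

# A collusive majority rates the seller 0.9, but the buyer's own
# experience of the seller is 0.2, so those opinions are discarded.
sr, fair = shared_reputation(0.2, [0.9, 0.9, 0.9, 0.25, 0.15])
```

Because the reference point is the buyer's own experience rather than the opinion set itself, the filter survives even when the majority of opinions are collusive, unlike majority-based iterated filtering.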

Computing Overall Reputation (OR)
After computing the individual and shared reputation, buyer b combines these to compute the overall reputation of seller s as the weighted sum OR = α·IR + (1 − α)·SR, where IR is the individual reputation of seller s as computed by buyer b, SR is the aggregated shared reputation component, and α is the experience gain factor with 0 ≤ α ≤ 1. The initial value of α before the first transaction between a buyer-seller pair is 0, and with each successive transaction it is incremented by a small factor, say 0.01, to ensure that as the number of mutual transactions grows, the relative weight of IR increases and that of SR decreases. After a large number of mutual transactions, when α approaches 1, OR depends only on IR as the weight of SR approaches 0. Initially, when a buyer has no experience of a seller, it depends largely on others' opinions at the cost of incurring communication overhead; however, once a buyer gains enough experience of a seller, it can avoid the overhead of inter-agent communication. Further, if a seller is new to buyer b, its overall reputation is simply its shared reputation, as no individual experience exists yet.
Also, if a seller is new to the e-market, it starts from the least initial reputation. Finally, the buyer updates its sets of reputed, non-reputed, dis-reputed and new sellers. As our strategy is sensitive to the varying parameters of the e-market, it reduces the incentive for rogue sellers that try to benefit from vulnerabilities in the reputation function.
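The combination step can be sketched directly from the description: an experience-weighted blend of IR and SR, with the experience gain factor α growing by a small increment (0.01, as in the text) per mutual transaction. The function name and the linear growth of α are illustrative assumptions.

```python
# Overall reputation as OR = alpha * IR + (1 - alpha) * SR, where
# alpha grows by a small step per mutual transaction (0.01 here).

def overall_reputation(ir: float, sr: float, n_transactions: int,
                       step: float = 0.01) -> float:
    alpha = min(1.0, n_transactions * step)  # experience gain factor
    return alpha * ir + (1.0 - alpha) * sr

new_pair = overall_reputation(0.0, 0.6, 0)    # no experience: SR only
seasoned = overall_reputation(0.8, 0.6, 100)  # alpha reaches 1: IR only
```

As α approaches 1 the buyer can skip soliciting opinions altogether, trading the communication overhead of reputation sharing for reliance on its own accumulated experience.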

Use of Attack-Resilient Reputation Parameters
In the untrusted e-market environment, the strength of a reputation system usually rests on its potential to endure attacks by rogue members. Hence, reputation functions must include defense-enhancing parameters to ensure that honest parties are rewarded with higher reputation and improved economic gains, while cheaters are penalized with economic loss. In addition to protecting against rogue counterparties, a reputation system must be resilient to attacks involving the dissemination of unfair information among agents, and it must encourage advisors to share fair information. Two major rating misbehaviors during the opinion-gathering process are Individual Unfair Ratings and Collaborative Unfair Ratings [2]. Compared with collaborative unfair ratings, individual unfair ratings usually cause less damage; hence we focus on filtering out collaborative unfair ratings. Next, we analyze the contribution of the parameters in the reputation functions of the proposed system to its attack-resilience.

Dynamically Setting Reputation and Dis-Reputation Thresholds
As defined above, each buyer maintains four seller profiles with the help of two thresholds: buyer b's reputation threshold, above which all sellers are termed reputed, and its dis-reputation threshold, below which all sellers are termed dis-reputed. To safeguard buyers, we propose a strategy to compute the reputation and dis-reputation thresholds based on the following three factors.
Risk-taking capability of the buyer: One factor in setting these thresholds is the buyer's risk-taking capability. We use buyer-specific subjective constants as the base values of the reputation and dis-reputation thresholds, set according to the buyer's risk-taking ability. Consequently, a risk-averse buyer would choose higher values, and a risk-tolerant buyer interested in exploring the market would choose lower values for these thresholds.
Trustworthiness of e-market: The less trustworthy the e-market, the higher the reputation and dis-reputation thresholds should be set. The trustworthiness of the e-market for a buyer is computed from its experience in past transactions.
Value of the transaction: As fraudulent sellers gain more by cheating in a high value transaction, a buyer needs extra protection there. The value of a transaction must therefore be a vital factor in setting the reputation and dis-reputation thresholds: a buyer is more risk averse in a big transaction and will set these thresholds higher than in a low value transaction.
Based on the above discussion, the reputation threshold and the dis-reputation threshold of buyer b are computed from the following quantities: the number of transactions of buyer b in the recent past, the number of honest transactions among them, the buyer's experience expressed as the ratio of dishonest transactions to total transactions, a constant λ between 0 and 1, the value of the transaction, and the exponential constant e. If a buyer has no experience of a seller, the experience ratio takes a default value of 1 so that the thresholds are initially set with a relatively strict criterion. As the buyer gains experience of a seller, the ratio is computed from its actual experience. Also, as these thresholds grow monotonically with the value of the transaction, their values are higher for big transactions than for small ones.
As sellers with reputation below the dis-reputation threshold are blacklisted, a buyer should set the base value of the dis-reputation threshold carefully. To avoid a majority of sellers falling below the dis-reputation threshold without concrete evidence of successive dishonesty, this base value should be set suitably low.
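The three factors above can be combined in many ways; the paper's exact closed form is not reproduced here, so the following sketch is only illustrative. All parameter names (`base_rep`, `base_disrep`, `lam`, `max_value`) are our assumptions, chosen to show the intended monotonic behavior: stricter thresholds for a less trustworthy market and for higher transaction values.

```python
import math

def thresholds(base_rep, base_disrep, dishonest_ratio, txn_value, max_value, lam=0.5):
    """Illustrative computation of buyer b's reputation/dis-reputation thresholds.
    base_rep, base_disrep : buyer-specific base values (risk appetite)
    dishonest_ratio       : dishonest transactions / total recent transactions
    txn_value, max_value  : value of this transaction and a normalizing maximum
    lam                   : constant in (0, 1) weighting the two adjustments
    """
    # A less trustworthy market (higher dishonest_ratio) raises both thresholds.
    market_factor = lam * dishonest_ratio
    # Higher-value transactions raise both thresholds, monotonically in value.
    value_factor = (1 - lam) * (1 - math.exp(-txn_value / max_value))
    rep = min(1.0, base_rep + market_factor + value_factor)
    disrep = min(rep, base_disrep + market_factor + value_factor)
    return rep, disrep
```

A risk-averse buyer would pick high base values; a risk-tolerant buyer exploring the market would pick low ones.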

Dealing with New and Non-Reputed Sellers
In addition to reputed sellers, a buyer should also give a chance to new and non-reputed sellers. Initially, when the buyer itself is new, it must deal with new sellers. As the buyer gains experience of various sellers, it relies on its already discovered set of reputed sellers and makes less effort to find new ones. In an e-market with highly unpredictable seller behavior and a high frequency of new sellers joining, the tendency to give a chance to a new seller should be high. But if most transactions are honest, or if new sellers seldom join the e-market, the rate of discovery of new sellers may be rather low.
In the proposed system, a buyer uses the following strategy to find new sellers. If the number of offers by reputed sellers is above a certain threshold, and the fraction of offers by reputed sellers out of the total set of offers is above an upper threshold, then the buyer chooses only from reputed sellers. If the fraction of offers by reputed sellers is at or above a lower threshold but below the upper threshold, then the buyer chooses from reputed and new sellers. Otherwise, the buyer chooses from reputed, non-reputed and new sellers for the product purchase. This implies that, for seller selection, new sellers are given slightly higher preference than non-reputed sellers, for the following reasons: to ensure fair competition, new sellers must be given a chance, whereas non-reputed sellers may include sellers who behaved dishonestly in the past, causing their reputation to fall below the reputation threshold.
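The selection strategy above can be sketched as follows. The threshold names (`n_min`, `theta_hi`, `theta_lo`) are placeholders for the paper's unnamed constants, so treat this as a sketch of the decision logic rather than the exact rule.

```python
def candidate_sellers(offers, rep_profile, n_min=5, theta_hi=0.6, theta_lo=0.3):
    """Choose which sellers a buyer considers, given a list of offering
    seller ids and rep_profile mapping seller id -> one of
    'reputed', 'non_reputed', 'dis_reputed', 'new'."""
    reputed = [s for s in offers if rep_profile.get(s) == 'reputed']
    frac = len(reputed) / len(offers) if offers else 0.0
    if len(reputed) >= n_min and frac >= theta_hi:
        return reputed                                  # enough reputed offers
    new = [s for s in offers if rep_profile.get(s) == 'new']
    if theta_lo <= frac < theta_hi:
        return reputed + new                            # admit new sellers too
    non_reputed = [s for s in offers if rep_profile.get(s) == 'non_reputed']
    return reputed + new + non_reputed                  # widest candidate pool
```

Note that dis-reputed sellers never enter the candidate pool in any branch, consistent with their blacklisting.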

Choosing the Appropriate Penalty
Here, we analyze the utility of computing a lower bound on the penalty to ward off a simple Oscillations (OC) attack by a rogue seller. Consider a situation where seller s was dishonest in the last transaction with buyer b, which reduced its reputation, and in the next transaction with b behaves honestly to restore its reputation to the value it had before the dishonest transaction. At the time of the dishonest transaction, the overall reputation of s for b is a result of b's self experience and the opinions of others. Assume that after the transaction between b and s, the actual value of the product received from s is found to be less than its expected value; based on Equation (5) and Equation (6), buyer b then decrements the Individual Reputation (IR) of seller s. To regain the lost reputation, s behaves honestly in the next transaction with b so that b enhances the IR of s according to Equation (1) and Equation (2). To keep things simple, we assume that either the Shared Reputation (SR) is the same as the IR, or that the weight of the SR is negligible; using Equation (16), the overall reputation then equals the IR.
Seller s regains its past reputation through honest behavior only if the reputation gained in the honest transaction is at least equal to the reputation lost in the dishonest one. Besides the penalty, the relative value factor is the main determinant of the reputation loss. Expressing the value of the honest transaction as a constant fraction c in (0,1] of the value of the dishonest one, Equation (23) yields the lower bound on the penalty given in Equation (25).
By keeping the penalty sufficiently above the threshold described in Equation (25), buyer b can negate the impact of simple oscillatory behavior, as the reputation loss for a dishonest transaction is then always higher than the gain for an honest transaction. Hence, if a seller with oscillatory behavior wants to remain a reputed counterparty for a buyer, it must behave honestly in large value transactions, which reduces the incentive for the rogue seller. In the proposed system, we use the above analysis to set the minimum penalty in Equation (7). Further, in Equation (7), the overall penalty is set dynamically, as it also depends on the value of the transaction, to ensure a higher penalty for rogue behavior in a large value transaction than in a small value one.
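The effect of the penalty lower bound can be demonstrated with a toy reputation update. This is a stand-in for the paper's Equations (1)-(2) and (5)-(6), not the actual formulas: the update rule, the learning rate `alpha`, and the clamping to [0,1] are our assumptions. The point it illustrates is the one derived above: with a penalty factor above 1, an honest/dishonest oscillation at equal transaction value always loses more reputation than it regains.

```python
def update_reputation(rep, honest, value, max_value, penalty=1.8, alpha=0.1):
    """Toy IR update: reputation change scales with the relative transaction
    value (cf. the relative value factor of Equation (3)), and losses are
    amplified by the penalty factor."""
    eta = value / max_value                 # relative value factor
    delta = alpha * eta
    rep = rep + delta if honest else rep - penalty * delta
    return max(0.0, min(1.0, rep))

# One dishonest/honest oscillation cycle at equal value: with penalty > 1
# the loss (penalty * delta) exceeds the subsequent gain (delta), so the
# oscillating seller's reputation drifts downward.
rep = 0.8
for _ in range(5):
    rep = update_reputation(rep, honest=False, value=500, max_value=1000)
    rep = update_reputation(rep, honest=True, value=500, max_value=1000)
```

With `penalty=1.8` each cycle costs a net 0.8·alpha·eta of reputation; a penalty below 1 would instead make the oscillation profitable, which is exactly what the lower bound rules out.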

Handling Re-Entry/Sybil Attacks
In a transaction, reputation is assigned to a buyer or seller against an identity. To build and maintain its trustworthiness, a seller has to sacrifice short term gains by incurring two types of costs: a Reputation Building Cost (RBC) and a Reputation Maintenance Cost (RMC). The RBC comprises the initial loss of profit that an honest seller must incur to gain reputation, and the RMC comprises the short term profits sacrificed by not indulging in rogue behavior. To build reputation, a seller must be ready to work with minimum profit to establish itself.
In the e-market, if sellers change their identity through Re-Entry (REN) or Sybil attacks, they can repeatedly sell bad products and still earn profits. Frequent identity change by rogues raises the effective RBC of honest sellers, making their survival difficult. Inspired by Zhou et al. [23], we define the Net Gain (NG) to a dishonest seller in terms of the RBC and RMC in Equation (26), where n represents the number of buyers with whom the seller interacts. Observe that the RBC is only incurred if a seller delivers high quality products in the initial interaction period with each buyer. A rogue seller therefore benefits from frequently changing its identity whenever the NG is sufficiently higher than 0; hence, to discourage a rogue seller, the NG should be minimized. In systems [25], [29], [33] where new sellers are assigned an initial reputation above the minimum possible reputation, the RBC approaches 0, and the NG of a dishonest seller becomes equal to that of an honest seller. This enhances the incentive for sellers to behave dishonestly by frequently changing their identity, as shown in Figure 4.
In reputation systems, ensuring a high RBC is a viable way to reduce the incentive for a change of identity. An enhanced RBC reduces the NG from a REN as well as a Sybil attack. On the flip side, however, an increased RBC also applies to all honest new sellers. We handle this problem by giving a fair chance to new sellers, as explained in section 5.4.2. In our system, the RBC is relatively high, as a new seller begins from the lowest possible reputation of 0 and must behave strictly honestly to cross the dis-reputation and reputation thresholds before it can enjoy the benefits of being reputed. If a new seller behaves dishonestly even once before moving above the dis-reputation threshold, it is designated a dis-reputed seller for that buyer. The minimized NG in the proposed reputation system reduces the incentive for a seller to change its identity frequently.
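The incentive structure of Equation (26) can be sketched numerically. The function below is a toy version with illustrative parameter names, not the paper's notation; it assumes a cheater pays the RBC once per buyer it must initially impress but skips the RMC, since it does not sacrifice short-term profits.

```python
def net_gain(profit_per_sale, n_buyers, rbc_per_buyer, rmc_per_buyer, cheats=True):
    """Toy sketch of Equation (26): net gain NG to a seller across n buyers.
    A cheater pays RBC (initial honest behavior per buyer) but not RMC;
    an honest seller pays both."""
    cost = n_buyers * (rbc_per_buyer + (0 if cheats else rmc_per_buyer))
    return n_buyers * profit_per_sale - cost

# Starting every new identity at the minimum reputation keeps RBC high and
# shrinks the cheater's NG; granting free initial reputation sets RBC ~ 0
# and makes identity churn profitable.
ng_strict = net_gain(10, 5, rbc_per_buyer=9, rmc_per_buyer=3)    # high RBC
ng_lenient = net_gain(10, 5, rbc_per_buyer=0, rmc_per_buyer=3)   # RBC ~ 0
```

The comparison makes the argument of this section concrete: the same rogue strategy yields almost no gain under a high-RBC design but a large gain when new identities receive free initial reputation.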

Resolving Value Imbalance (VIM) Attack
The relative value factor η defined in Equation (3) ensures that the IR component increases monotonically with the value of a transaction. It counters VIM and OC attacks, where a seller acts honestly in small value transactions but turns rogue for a high value transaction. In addition, the relative value factor η plays a key role in deciding the amount of decrease in the IR component of a seller for dishonest behavior in a transaction.

Handling Self Promoting (SP) Attack
The discount term in Equation (2) ensures that, as the number of transactions between a buyer-seller pair grows, each increase in individual reputation is discounted. The term combines an initial reputation discount factor, the mutual trustworthiness discount rate ε, which starts at 0 and increases by a small increment with each successive transaction between a buyer-seller pair, and the number of past transactions between the buyer and the seller. It handles the SP attack, where a buyer and seller collude through a large number of mutual transactions in order to raise the seller's reputation.
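One way to realize such a discount is sketched below. The exact form of Equation (2) is not reproduced; the growth rule for ε (`eps_step` per mutual transaction) and the weight formula are our assumptions, chosen only to show the qualitative effect: the marginal reputation gain from yet another mutual transaction shrinks rapidly, starving an SP collusion of benefit.

```python
def ir_gain_weight(n_past, eps_step=0.05):
    """Sketch of the discount in Equation (2): the weight of each new
    reputation gain decreases with the number of past transactions between
    the same buyer-seller pair. The mutual trustworthiness discount rate
    starts at 0 and grows by eps_step per transaction (an assumption)."""
    eps = eps_step * n_past            # discount rate after n_past transactions
    return 1.0 / (1.0 + eps * n_past)

# The weight of a fresh gain after 0, 5, 20 and 100 mutual transactions:
weights = [ir_gain_weight(n) for n in (0, 5, 20, 100)]
```

Under this sketch a colluding pair that has already transacted 100 times gains roughly 1/500 of the reputation a first transaction would have earned, so ballot-style self promotion saturates quickly.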

Design and Implementation
We designed an agent-mediated reputation system built on top of the agent environment provided by the Java Agent Development Framework (JADE). The proposed reputation system is composed of three layers: a web interaction layer, an object-agent communication layer, and an agent environment layer implemented in JADE. Figure 5 shows the overall layered design of the proposed multi-agent system. Once the buyer provided its feedback, the buyer agent computed the individual reputation of the seller. Concurrently, the buyer agent b1 also solicited advice from other buying peers regarding the reputation of the target seller agent by sending REQUEST ACL messages to other buyers, as shown in Figure 8. The reputation of the seller s6 was updated by b1 in the database, and an ACL INFORM message was handed over to the gateway agent, which passed the message to the buyer servlet and finally to the buyer to reflect the completion of the transaction.

Experimental Study
An experimental study was conducted to underline the attack-resilient capability of the proposed reputation system in the e-market. To demonstrate the ability of our system, referred to in this section as the Agent-Mediated Attack-Resilient Reputation (AMARR) system, to counter various common attacks on reputation systems, we simulated a number of attack scenarios.
In one scenario, after 10 mutual past transactions, seller s2 launched a simple oscillations attack on buyer b1, in which s2 behaved honestly and dishonestly in alternate transactions. We compared our AMARR system with the Tran and Roozmand systems [25], [33], [34], which use fixed, subjective penalty factors and are not sensitive to the value of the transaction. We chose these systems from the literature [25], [33], [34] for comparison because they operate in an e-market environment similar to that of the AMARR system: they are designed for a reverse auction scenario in a distributed e-market full of self-interested participants, and they also use concepts like reputation/dis-reputation thresholds for pre-filtering and reward/penalty for honest/dishonest behavior. As shown in Figure 11, we first compared the systems by simulating the oscillations attack for two different products, a laptop and a pen drive. We observed that the oscillations attack was fully neutralized, as the loss of reputation of seller s2 from the alternating dishonest transactions was amply higher than the reputation gain from its honest behavior in the alternating honest transactions. For a small value product like the pen drive, our AMARR system was slightly better than the other two systems. But for a high value product like the laptop, the AMARR system performed substantially better than the others, as both the penalty and the reputation in our system are sensitive to the value of the transaction.
In a scenario reflecting the Value Imbalance (VIM) attack, buyer b1 and seller s4 were involved in repeated mutual transactions for buying a laptop and a pen drive. For dishonest transactions, the penalty in the Tran [33] and Roozmand et al. [25] systems and the base penalty in our AMARR system were kept at 1.8. During the VIM attack, seller s4 behaved honestly in a number of small value transactions for the pen drive but dishonestly in a large value transaction for the laptop. Due to this intermittent fraudulent behavior, our system showed an ample drop in the Individual Reputation (IR) of s4, as the change in reputation is monotonically proportional to the value of the transaction. This neutralized any incentive for such rogue behavior, resolving the VIM problem as shown in Figure 12. Figure 12 also illustrates a sophisticated oscillations attack in which the seller behaved dishonestly only in relatively high value transactions to make detection of its rogue behavior rather difficult. The Tran [33] and Roozmand et al. [25] models performed poorly, but our AMARR system was able to defy this attack. We also simulated a Re-Entry (REN) attack in which seller s5 exits the e-market to disown bad reputation and re-enters with a fresh identity. The REN attack was launched against our AMARR system with a reputation range of [0,1], as shown in Figure 13, where s5 had worked for 39 transactions to build a reputation of 0.224 under its previous identity. It was therefore observed that launching a REN attack by frequent change of identity was not beneficial for s5 under the AMARR system. Similarly, s6 with reputation -0.07 launched REN under Tran [33], and s7 with reputation -0.09 launched REN under Roozmand et al. [25]. Figure 13 shows that under Tran's and Roozmand's strategies the seller actually gained reputation by launching REN, as in both of these systems the range of reputation is [-1,1] and a new seller starts with a reputation of zero, i.e., from the middle of the reputation range.
In a Ballot Stuffing (BS) attack scenario, a number of transactions between buyer b1 and seller s6 were observed, where BS was launched on b1 after 1, 20, 30, 50, 75 and 90 transactions with an α incremental rate of 0.01. Figure 14 reflects the reduced impact of BS as the mutual experience between b1 and s6 increased. In another situation, reflecting a special case of the Ballot Stuffing/Bad Mouthing attack in which the majority of buying peers shared unfair opinions, buyer b1 solicited opinions about the reputation of seller s6 from other buying peers. As shown in Figure 15, buying peers were divided into three categories: honest, alternately honest/dishonest, and dishonest. Among the nine buying peers, two honest peers always provided fair opinions, two alternately honest/dishonest peers shared fair and unfair opinions in alternation, and the rest always shared unfair opinions. For filtering out unfair opinions we compared two strategies: the Majority Rating scheme used in the literature, which employs endogenous filtering of unfair information [15], [17], [19], and the Shared Reputation methodology of our AMARR system, which filters out unfair opinions based on their deviation from the buyer's individual/self experience. We ran the simulation for a number of transactions between b1 and s6, though Figure 15 shows the outcome of only the first 10 transactions, as afterwards the reputation of some buying peers moved below the reputation threshold. We observed that under the Majority Rating scheme based on endogenous filtering, the reputation of the two honest buying peers suffered the most: due to the presence of a large number of unfair advisors, the mean of the advisors' ratings, from which this method computes deviation, was itself biased towards the unfair advisors. As a result, honest buying peers were filtered out, and peers who shared unfair information gained reputation. On the contrary, Figure 15 shows that under the Shared Reputation methodology of our AMARR system, which filtered out unfair opinions based on their deviation from the buyer's own experience of the target seller, honest opinion providers received the maximum incentive, with their reputation as advisors enhanced consistently with each transaction except for the first few. Similarly, the dishonest advisors were penalized the most, as their reputation decreased consistently and finally moved below the reputation threshold for advisors.
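The difference between the two filtering strategies can be shown with a small sketch. Both filters below are simplified stand-ins (the tolerance `tol` and the exact deviation rules are assumptions, not the paper's formulas), but they reproduce the effect observed in the experiment: when colluders form the majority, filtering by deviation from the mean keeps the colluders, while filtering by deviation from the buyer's own experience keeps the honest advisors.

```python
def filter_by_own_experience(opinions, own_ir, tol=0.2):
    """Sketch of AMARR's exogenous filter: keep an advisor's rating only if
    it is close to the buyer's own individual reputation (IR) of the seller."""
    return {adv: r for adv, r in opinions.items() if abs(r - own_ir) <= tol}

def filter_by_majority(opinions, tol=0.2):
    """Endogenous majority-rating filter: keep ratings close to the mean,
    which is biased when most advisors collude."""
    mean = sum(opinions.values()) / len(opinions)
    return {adv: r for adv, r in opinions.items() if abs(r - mean) <= tol}

# Seven of nine advisors collude and report 0.9 for a seller the buyer
# knows to be mediocre (own IR = 0.3); two honest advisors report ~0.3.
opinions = {f'c{i}': 0.9 for i in range(7)}
opinions.update({'h1': 0.30, 'h2': 0.35})
kept_amarr = filter_by_own_experience(opinions, own_ir=0.3)
kept_majority = filter_by_majority(opinions)
```

Here the collusive mean (~0.77) drags the majority filter toward the unfair raters, so the honest pair is discarded; anchoring on the buyer's own experience inverts the outcome.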

Discussion
As e-commerce involves the exchange of money, the risk involved in distributed e-commerce transactions is much higher than in other applications like P2P file sharing [9]. Further, with the increase in e-commerce transactions, the frequency of online fraud is also on the rise. In such an environment, exercising caution is not sufficient, and participants in online transactions need a more proactive approach to protect themselves from fraudsters who often disguise themselves as honest potential partners. The risk involved in e-market transactions can be reduced by integrating attack-resilient features into the reputation design to enhance the trustworthiness of individual users as well as of the whole reputation system. This paper presented a novel approach to developing an attack-resilient reputation system in a distributed e-market environment full of self-interested buyers and sellers. The proposed reputation system proactively expects fraudulent behavior from different participants, especially sellers, and offers improved protection for honest buyers, as the proposed reputation computing methodology makes it unattractive for rogue sellers to indulge in such attacks.
Handling Re-Entry (REN) and Sybil attacks is essentially a problem of identity management. Reputation systems try to handle these attacks not by detecting them but by making it unattractive for fraudulent participants to indulge in them. The reputation system proposed in this paper is thus based on reducing the incentive to acquire new identities frequently, and it discourages the REN attack for the following reasons. First, to re-enter the e-market with a new identity, an agent must begin from the minimum reputation, which means an enhanced RBC and thus a reduced NG, i.e., net gain to the rogue seller, minimizing the benefit of a REN attack. Second, the system sets the reputation/dis-reputation thresholds dynamically, so that for a big transaction or a less trustworthy e-market these thresholds are higher, requiring greater effort from a new seller to become reputed. Third, even after re-entering the e-market with a new identity, a seller must show persistent honest behavior until its reputation rises safely above the dis-reputation threshold; dishonest behavior even once while below the dis-reputation threshold results in the new seller being designated a dishonest seller, never to be considered again for business by that buyer. In the literature [3], [25], [29], [34], new sellers are assigned a reputation above the allowed minimum, which minimizes the RBC and thus enhances the incentive for sellers to launch a REN attack. To reduce REN, some online auction platforms [26] charge an entry fee from new sellers, and Kerr and Cohen [27] handle REN with partial success.
The proposed system uses the value of the transaction as one of the fundamental factors for computing the reputation of a participant. This helps resolve the Value Imbalance (VIM) attack, as the extent of the change in reputation is monotonically related to the value of the transaction. It makes the reward and penalty for honest/dishonest behavior of sellers relative to the size of a transaction, rendering the VIM attack ineffective. PeerTrust [20] and Trunits [27] also recognize this factor for computing trust.
In our system, the impact of Reciprocity (REC) or Retaliation (RET) is limited by the value of the transaction and by the discounted change in reputation with each successive transaction between a collusive buyer-seller pair. Commercial models like eBay show a strong presence of this attack, as approximately 98% of eBay ratings are positive due to the apprehension of a RET attack.
The impact of the Self Promoting (SP) attack is reduced in the proposed reputation system because, through the mutual trustworthiness discount rate ε, reputation earned by a seller from repeated transactions with the same buyer is discounted with each successive mutual transaction. To the best of our knowledge, only Trunits [27] recognizes the significance of dealing with this problem.
Our system minimizes the impact of the Sybil attack, as the reduced incentive for REN due to a high RBC also discourages a rogue seller from acquiring multiple identities. With each new identity, a seller must work honestly to become reputed, under the threat of being designated a dishonest seller if its behavior is found unsatisfactory. Other systems [3], [25], [34], in which new sellers' initial reputation is above the minimum reputation, have a negligible RBC, resulting in a high incentive for sellers to launch a Sybil attack.
Our reputation sharing strategy reduces the impact of Ballot Stuffing (BS) and Bad Mouthing (BM), as with each successive transaction between a buyer-seller pair the weight of shared reputation decreases, becoming negligible once the buyer acquires enough experience of the seller. This strategy is sensitive to the dynamic parameters of the e-market, and it encourages an honest advisor by enhancing its reputation and the weight of its opinion with each successive honest advice. It filters out unfair ratings based on the deviation of advisors' opinions from the individual experience of the source buyer, which enables the buyer to handle BS/BM even if the majority of advisors are collusive. Existing systems [1], [3], [15] fail to handle these attacks when the majority of raters are unfair.
The proposed system can handle the Malicious Spies (MS) attack, as participants are assigned a reputation exclusively as advisors/information providers that is separate from their reputation as customers/traders. Systems in the literature [3], [15], [16] that do not differentiate between a participant's behavior as a buyer/seller and as an advisor suffer from MS. By cautiously setting the minimum fixed penalty, and by using a dynamic overall penalty that is sensitive to the value of the transaction, the proposed reputation system minimizes the impact of the Oscillations (OC) attack by a seller. To regain lost reputation, a seller has to behave honestly in large value transactions and over a large number of transactions, which reduces the seller's incentive to launch an OC attack. A summary of the built-in attack-resilience capability of the proposed reputation system, compared to other systems from the literature, is presented in Table 1.

Value Imbalance (VIM)
VIM is resolved as the amount of change in reputation is monotonically related to the value of a transaction.

Reciprocity (REC) and Retaliation (RET)
Its impact is minimized as the reputation gain of a seller from repeated transactions with a buyer is discounted with each new transaction. REC/RET is also limited by the value of the transaction.
Commercial systems like eBay show a strong impact of this attack, as approximately 98% of eBay ratings are positive to avoid RET.

Re-entry (REN)
The REN attack is partially resolved, as a high RBC results in a low incentive for a rogue seller because the NG is reduced.
eBay and Trunits deal with this problem with partial success.

Multiple-Identity (MI) /Sybil
The benefit of the minimized incentive for launching REN, due to a high RBC and reduced NG, also extends to scenarios where a participant tries to manipulate ratings through multiple identities.

Reputation Lag (RLG)
The impact of RLG is reduced, as a cheater's reputation drops sharply due to the dynamic penalty and the dynamic reputation/dis-reputation thresholds. Once a buyer gains enough experience of a seller, it need not compute the SR, leading to quicker computation of the OR.

Ballot Stuffing (BS), or Bad Mouthing (BM)
The effect of BS/BM reduces with each successive transaction between a buyer-seller pair, as the weight of the SR decreases and becomes negligible when a buyer gains sufficient experience of the seller.

Malicious Spies (MS)
The MS impact is minimized as a separate reputation is assigned to a user as an opinion provider and as a buyer/seller. Very few systems (Regan & Cohen, 2005) assign reputation to advisors.

Self Promoting (SP)
It fairly handles the SP attack, as reputation earned by a seller from repeated transactions with a buyer is discounted with each successive mutual transaction.

Oscillations (OC)
It minimizes the impact of OC by setting the penalty factor cautiously and through the sensitivity of the reputation function to the value of the transaction.
Systems whose penalty is greater than the reward [25], [33] offer a partial solution.

Orchestrated (ORC)
A partial solution to a subset of attacks is possible, as dealing with multiple attacks by actors changing roles is very difficult.
No known solution for this type of multifaceted attacks.
The proposed system also minimizes the harm from the Reputation-Lag (RLG) attack, as both the IR and the OR of rogue sellers fall swiftly due to the dynamic setting of the penalty factor and of the reputation/dis-reputation thresholds. Hence, a seller's deceitful behavior is rapidly detected through a sudden drop in its OR. Further, once a buyer acquires sufficient experience of a seller, the system no longer needs to compute the SR, reducing the time lag in computing the OR. Systems from the literature [25], [27], [33], [34] that are based only on IR, i.e., only on direct evidence, are generally unaffected by the RLG attack, as there is no sharing of reputation among participants and the time gap between deceitful behavior and reduced reputation is therefore small. Suitable setting of a seller's reputation and dis-reputation thresholds in our system helps in accurate seller profiling. We set these thresholds using factors like the buyer's risk-taking ability, the trustworthiness of the e-market, and the value of the transaction. Existing systems [3], [25], [33] use seller reputation/dis-reputation thresholds that are purely subjective for each buyer.
The proposed system can also handle combinations of the attacks to which it is resilient. For example, in an environment with a combination of Sybil and SP attacks, the proposed system copes because the reputation sharing methodology elaborated in section 5.2 handles unfair reputation sharing, and the discounted change in reputation with each successive transaction between a buyer-seller pair, applied while computing the IR, reduces the impact of the SP attack. However, the combination of attacks launched at a given time can be quite complex, with a large and varying number of participants involved; such attacks are known in the literature as Orchestrated (ORC) attacks. In the case of an ORC attack, which involves launching a number of attacks in parallel, the proposed system only helps reduce the impact by handling a subset of the attacks, as handling multiple attacks with actors changing roles at different points in time is very difficult.
In addition to buyers, honest sellers must also be safeguarded in the e-market. A major risk for online sellers is that they may not get paid after delivering the product [11]. In our system, a seller delivers a product only after receiving the payment, so the seller is protected from payment concerns. But full attack-resilience in an online environment cannot be achieved by a sound reputation function alone; it requires an integrated approach involving identity management, authentication, and non-repudiation [13]. Therefore, a good reputation system should encourage honest behavior, discourage rogue behavior, and protect participants from various attacks. The proposed reputation system builds defense capability into the reputation computing methodology by increasing its resilience to various attacks, and by discouraging fraudulent behavior through a high, dynamically computed penalty on dishonest sellers compared to the corresponding reward for honest behavior. The proposed reputation system can be further extended by applying it to different types of products. The reputation sharing strategy can be made more robust by exploring new strategies for filtering out unfair opinions. In addition, investigating more complex interactions in the e-market, for example allowing transactions to complete over more than one cycle with buyer-seller negotiation, could be an interesting extension.

Conclusions
In this paper, we underlined the importance of attack-resilient parameters in a distributed reputation system for the e-market. In an e-market with self-interested participants, the proposed reputation system models the trustworthiness of participants using reputation functions that encourage honest behavior, while at the same time ensuring that the sellers who cheat the most are penalized the most.
The attack-resilient parameters in the proposed reputation system are sensitive to the changing parameters of the e-market, such as the experience of agents and the value of a transaction. One of the main contributions of the proposed system is to compute the reputation of sellers in such a way that the benefits to rogue sellers are minimized, making it unattractive to indulge in fraudulent behavior. This is achieved by dynamically setting the reputation and dis-reputation thresholds for sellers, by introducing an effective reputation increase/decrease factor, by dynamically setting an appropriate penalty factor, and by reducing the incentive for launching Re-Entry, Oscillations, Value-Imbalance and collusion based attacks. In the proposed reputation system, increasing transactional experience leads to an increased weight of individual reputation, and honesty in a large value transaction leads to a higher increase in reputation than in a small value transaction. Fraudulent sellers are penalized with a relatively large and swift drop in reputation, resulting in quick detection of their fraudulent behavior and reduced future business opportunities. Our system also assigns reputation to advisors, safeguarding buyers not only from rogue sellers but also from deceptive advisors.

Figure 1 :
Figure 1: Layered architecture of hard and soft security

Figure 3 :
Figure 3: Interaction sequence of agents in an e-market transaction. In our system, B is the set of buyers, S is the set of sellers, and G is the set of goods. The IR of seller s for buyer b at time t, the SR of seller s for buyer b at time t, and the overall reputation of seller s for buyer b at time t each take values in [0,1]. Each buyer maintains four seller profiles: reputed, non-reputed, dis-reputed and new sellers. A seller is reputed if its reputation is above buyer b's reputation threshold

Figure 4 :
Figure 4: Impact of change in identity if initial reputation is more than minimum reputation

Figure 5 :
Figure 5: Layered design of the distributed e-market. The Web layer deals with actual users and acts as a user interface. User interaction from the web is captured by the servlet to perform actions like creating a gateway agent for sending a product request to the buyer/seller agent. The programs that create gateway agents use the JadeGateway and GatewayAgent classes of JADE to connect agent

Figure 8 :
Figure 8: Inter-buyer communication sharing a seller's reputation. After receiving buying peers' opinions in the form of PROPOSE ACL messages, the buyer agent b1 filters out dishonest opinions, computes the shared reputation component, and updates the reputation of the advising buying peers. Figure 9 illustrates a MATLAB snapshot showing the updated individual reputation of the seller after purchase, the filtering of opinions, and the updating of advisors' reputation.

Figure 9 :
Figure 9: Computing individual and shared reputation. The buyer b1 combined the individual and shared reputation components to compute the overall reputation of seller s6, as illustrated in Figure 10 below.

Figure 10 :
Figure 10: Updated reputation of buying peers as advisors

Figure 14 :
Figure 14: Reducing impact of ballot stuffing (BS) with increase in mutual past experience

Figure 15 :
Figure 15: Filtering out unfair opinions in other systems and in the AMARR system

Table 1 :
Comparison of attack-resilience in AMARR system with literature work