Allocating Sensor Network Resources Using an Auction-Based Protocol

Wireless sensor networks are increasingly being used for remote environmental monitoring. Despite advances in technology, there will always be a disparity between the number of competing sensor devices and the amount of network resources available. Auction-based strategies have been used in numerous applications to provide efficient or optimal solutions for fairly distributing system resources. This paper investigates the suitability of using online auctions to allow sensors to acquire preferential access to network resources. A framework is presented that allocates network priority to sensor devices based on characteristics such as cost, precision, location, significant changes to readings, and the amount of data collected. These characteristics are combined to form the value of a particular sensor's bid in an auction. The sensor with the highest bid wins preferential access to the network. Priority can be dynamically updated over time with regard to these characteristics, changing conditions for the phenomenon under observation, and input from a back-end environmental model. We present an example scenario of monitoring a flood's progress down a river to illustrate how the proposed auction-based system operates. A series of simulations was undertaken with a preliminary auction structure to examine how the system functions under different conditions.


Introduction
Effective remote monitoring of sensitive ecosystems is vital to ensuring their sustainability into the future. Wireless sensor networks (WSNs) are gaining popularity for use in environmental measurement and monitoring initiatives due to their versatility and scalability. Advances in technology have led to high-speed broadband, cloud storage, pervasive computing, and semantically-enabled networked devices (i.e., "the Internet of Things") [3]. This makes sensor network systems more affordable and ubiquitous, thereby allowing the collection of more accurate and larger datasets on phenomena under study. Environmental scientists are set to greatly benefit from the increased knowledge brought about by the data gathered through sensor networks, which will facilitate the development of effective management policies, particularly those that address natural disasters.
However, despite increases in network capacity (i.e., bandwidth), the number of sensor devices also grows proportionally, which maintains an inequality between the bandwidth required for transmitting large amounts of sensed data and the bandwidth actually available. This problem continues to limit the potential of environmental monitoring systems because the lack of required capacity leads to data loss at the point of collection. Therefore, dynamic decisions must be made about the allocation of scarce network resources amongst the competing devices. This predicament cannot be addressed through advancements in hardware technology alone; a smart software solution is required.
Trevathan et al. [24] describe an initiative to create intelligent, low-cost WSNs for environmental monitoring applications (referred to as Smart Environmental Monitoring and Analysis Technologies (SEMAT)). The SEMAT WSN infrastructure allows sensors to interface with a back-end environmental model in near real-time to facilitate reasoning over the collected data. As such, sensors can be dynamically re-tasked and the environmental model can evolve as new data is collected by the system. This enables the WSN to give priority to sensors collecting the most relevant data regarding a phenomenon, rather than being deluged with unimportant details. It is possible to integrate an intelligent WSN's infrastructure with an auction-based mechanism for resource allocation. Through dynamically updating sensor priorities, it is envisaged that the system could be extended to enable phenomenon tracking, which has potential applications in natural disaster management, environmental monitoring, and security. This paper examines the applicability and suitability of using online auctioning mechanisms to improve resource-scheduling outcomes for WSNs. We present a basic framework for using auctions to allocate resources in a low-cost, intelligent SEMAT WSN [24]. Priority is allocated to devices based on characteristics such as cost, precision, importance of location, significant changes to regular readings, and the amount of data collected in terms of whether a sensor device's memory buffer is full. Individual priorities are dynamically updated over time (i.e., after each auction) with regard to these characteristics, changing conditions for the phenomenon under observation, and also with input from a back-end environmental model. This allows the WSN to potentially track the progress of a phenomenon throughout a geographical area [1], [2]. Note that this differs from other market-based approaches in that the auction process is linked to an environmental model, which can significantly bias the outcome in favor of certain bidders (sensors). Auction-based strategies for task allocation have been addressed in the literature before [9], [15]-[17], [27]. However, this has been purely from a theoretical economic perspective. This paper focuses on the practical implications of an auction-based allocation scheme, including the types of possible auctioning structures, the protocol, and the practical issues with implementing such a system using existing network technologies. We present an example scenario that simulates a flood moving down a river to illustrate how the system operates.
The paper is organized as follows: Section 2 presents the problem motivation and related work. Section 3 describes the framework for using an auctioning mechanism for resource allocation in sensor networks. Section 4 discusses the methodology and construction of the proposed system. Section 5 presents the results from simulations to evaluate the system's performance. Section 6 provides some concluding remarks.

Problem Motivation and Related Work
This section outlines the original problem motivation and existing literature related to our work.

Using E-Commerce for Dynamic Resource Allocation
Previous SEMAT [24] environmental monitoring deployments have been based on a hierarchical WSN model (see Figure 1). In the SEMAT WSN there is a central base station that communicates with a number of sensor devices. A sensor device in this context refers to an apparatus that takes a specific reading on one particular phenomenon (e.g., light, temperature, salinity, pressure, etc.). Each device has some form of onboard processing power, storage capacity, and communication ability. For example, in the SEMAT deployments each sensor device is connected to a Gumstix computer-on-module (Site 1) or an Arduino microcontroller (Site 2) that uses wi-fi to communicate with the base station. A unique characteristic of the SEMAT model is the presence of a back-end environmental model that consumes the collected data. This allows the environmental model to be updated over time as the data is collected (e.g., updating conditional probability tables in a Bayesian network). For the purposes of this paper, it is assumed that the system has real-time, two-way communication between the base station and the sensor devices. Furthermore, it is assumed that power management is not a concern, and that all network issues (e.g., routing, etc.) have been resolved.
Consider an example application with a safety-critical sensor network for flood monitoring that contains thousands of devices across a river system. While each sensor may be trying to send data over the network (to a base station) about an approaching flood, the system must decide which sensors are the most important to allocate priority network resources to. If each sensor is simultaneously transmitting, network congestion will ensue. This directly impacts the timely dissemination of critical information. In a flood situation such information might dictate who should be evacuated first, which locations rescue crews should be assigned to, or what areas of economic interest should be protected first. (This scenario is applicable to bushfires, tsunami, earthquakes, lava flow, people/animal tracking, oil spills, bio-threat monitoring, etc.)

Figure 1: The intelligent environmental sensor network infrastructure

At present the only approach is to assign priorities before deploying devices [22]. For example, if there is a house in a particular area, sensors near the house are given higher priority, and therefore have first access to bandwidth. However, this does not allow dynamic updating of priorities in response to changing conditions. The other option is to wait until data is received at a centralized source, have a program or human make sense of it, and then reconfigure the sensors' priorities remotely (assuming the system is able to support remote operation). As such there are two types of sensor network deployments to consider:

Static - These are one-off systems that will be deployed for a set period and then removed. No changes to the system in any form will occur during the period.

Dynamic - A long-term sensor network deployment where new devices are added or modifications to existing devices occur. Furthermore, existing sensor devices may also be removed.
Static deployments establish the network allocation policy a priori. Therefore, the auction-based approach is not really relevant in their context. Instead, this paper will focus on dynamic deployments.
Consider the new upcoming generation of sensors that can do some data processing and have intermediate storage (as in the SEMAT project [24]). Access to the network resources could be modelled economically, where the commodity being sought is bandwidth. The currency is priority, or the right to have first access to the bandwidth. The issue to be resolved is how to allow a sensor to determine what priority it has independently and/or have its priority dynamically updated by the base station depending on the sensor's properties (i.e., cost, precision, location, changes in readings, buffer size) and the observed actions of the phenomenon under study. Assume all sensors are of the same type and cost (e.g., all $20 water level sensors) and have onboard computing power. If one sensor starts reading significant changes to what it has regularly been monitoring, then its priority should be increased accordingly.

Related Work -Network Congestion Management and E-Commerce Strategies
Prioritization techniques for event tracking are ultimately a subset of network congestion management. Congestion occurs when the amount of information being transmitted throughout a network exceeds the available resources. Congestion causes lost data from buffer overflows (i.e., node-level congestion) and/or dropped packets from transmission errors (i.e., link-level congestion). Congestion results in low data throughput, low quality of service, and excessive power consumption due to retransmission attempts. While congestion management has been studied in-depth for traditional networks, WSNs present new challenges given the finite nature of memory/computation and the environmental conditions under which sensor networks are deployed.
Congestion management approaches can be considered as either active or reactive. In an active scheme, the system monitors its throughput status and tries to ensure that congestion does not occur. A reactive scheme deals with congestion once it has occurred. Such schemes can also be classified according to whether they are global or local. In a global scheme there is a central authority that monitors and manages congestion. Alternately, a local approach allows nodes within the network to collaborate and manage congestion at a local level. The type of approach employed could depend on the size of the network, the computational capabilities of the network nodes, and the application for which the network is being employed.
A key concept for congestion management is a feedback loop. Consider an analogy whereby a commuter is driving along a busy motorway. An authority is monitoring the motorway via video cameras and has a series of electronic sign posts at various locations throughout the motorway to notify motorists of important information. If the authority observes a crash that is resulting in traffic congestion for several kilometers, then the authority can notify motorists heading towards the affected area that there is congestion ahead and to proceed with caution. This analogy illustrates the function of a feedback loop. In a network sense, there is an application monitoring the system which provides information back to the network about the congestion status. This could operate at a global level (i.e., the motorway example) or at a local level whereby nodes transmit information about local conditions.

Chen and Yang [6] propose a congestion avoidance scheme based on light-weight buffer management. The approach adapts the sensors' forwarding rates to reduce congestion. They implement the buffer-based congestion avoidance with different Media Access Control (MAC) protocols, including Carrier Sense Multiple Access (CSMA) and Time Division Multiple Access (TDMA). Their scheme is based on local feedback from intermediate nodes that use Hidden Markov Models and other probabilistic techniques. They evaluated their scheme via simulations.
Lim et al. [14] propose the Adaptive Distributed Resource Allocation (ADRA) scheme, an adaptive approach for distributed resource allocation in WSNs. Their scheme uses local actions performed by individual sensor nodes for node management. Each node adapts its operation over time in response to the status and feedback of its neighboring nodes. They study the effectiveness of the ADRA scheme for sensor mode management in an acoustic sensor network used to track vehicle movement. The scheme is evaluated via simulations, and also prototyped using Crossbow MICA2 motes. They claim the ADRA scheme provides a good trade-off between performance objectives such as coverage area, power consumption, and network lifetime.

Using Online Auctions for Resource Allocation in Sensor Networks
This section discusses the challenges for using auctioning mechanisms in WSNs and presents the justification for the auction structure proposed in this paper.

Contrasting Auction Types
Some of the major types of auctions include:

English [10] - The price is competitively bid upwards until no new bids are submitted after a given time-out period (referred to as an open bid auction). The winner is the bidder with the highest price (referred to as a first price auction).

Vickrey [25] - Bids are submitted in secret (referred to as a sealed bid auction). The winner is the bidder with the highest price. However, s/he only pays the second highest price for the item.

Dutch [7] - The price is continually offered downwards by the seller (referred to as a reverse auction). The winner is the first bidder to accept the going price. This style of auction is commonly used as a tendering process for large projects.

Continuous Double Auction (CDA) [8], [23] - There are many buyers and sellers who are continuously trading multiple items (e.g., a stock exchange).
In terms of WSN resource allocation, several rounds of bidding are not required as in an English auction. Having several rounds of bidding would complicate the system and use limited resources to conduct. There is no requirement that bids remain secret as in a Vickrey auction; therefore, bids do not need to be sealed. Dutch auctions do not seem applicable in the context of WSNs and also require several rounds of bidding. While a CDA is the most versatile in obtaining the functionality raised in Section 2, it has more complicated rules than the other auction types and imposes more significant technical challenges for a WSN.
Due to the aforementioned reasoning, the auction type chosen for this paper is a form of first price, open bid auction (a mixture of English and Vickrey). Each sensor device can only bid once during an auction (i.e., the auction is a single-round process).
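As a minimal sketch (not taken from the paper's implementation), this single-round, first-price rule can be expressed as follows; the sensor identifiers and the tie-breaking rule are assumptions:

```python
# Hypothetical single-round, first-price auction: each sensor submits
# exactly one bid and the highest bidder wins preferential access.
def run_auction(bids):
    """bids: dict mapping a sensor id to its bid value ($).
    Returns (winner, winning_bid); ties are broken by lowest sensor id."""
    if not bids:
        raise ValueError("no bids submitted")
    # Sort key: highest bid first, then lexicographically smallest id.
    winner = min(bids, key=lambda s: (-bids[s], s))
    return winner, bids[winner]
```

For example, `run_auction({"s1": 0.3, "s2": 0.7})` selects `s2`, which then receives first access to the bandwidth for that round.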

Auction Structures
As sensor networks can be constructed using various topologies, the exact structure of the auction can vary. The following are the structures that could facilitate the aforementioned auction types:

Centralized auction house - There is a central auctioneer, A, to whom bidders (b1, b2, ..., bn) submit bids (Figure 2 (a)).

Distributed auction house - There are several auction houses, A1 and A2 (Figure 2 (b)). Each auction house could be responsible for a particular section of the WSN, or they can be used in unison for implementing cryptographic protocols for security in electronic auctions [23].

Peer auction house - The peer approach is the alternate extreme, where all bidders can exchange resources amongst themselves (e.g., a clearing house or CDA) (Figure 2 (c)).

Hybrid auction house - In a large WSN the entire community could be made up of all the aforementioned structures. The individual auctions could also be of varying auction types (e.g., English, CDA, etc.) (Figure 2 (d)).
Figure 2: Differing auction structures that could be employed for a sensor network

For this paper, a SEMAT-style [24] deployment is used (e.g., hierarchical with a single base station and multiple sensor devices possessing some storage and computational ability). To keep things simple, the auction structure is a centralized auction house, where the auctioneer is the base station and the sensor devices are the bidders. This approach facilitates a global, reactive congestion and object-tracking mechanism.

Auction Phases
There are several phases common to all auction types. These include initialization (where the auction is set up), bidding (bids are submitted), and winner determination. Some auction types such as CDAs perform these processes repeatedly. The following describes how these processes relate to sensor networks and the proposed system:

Initialization - There are three initialization types in a dynamic deployment with regard to sensor devices: 1. Deploying all sensor devices as part of establishing the WSN; 2. Deploying new sensor devices throughout the WSN's life span; and 3. Modifying existing sensor devices.
Bidding -Sensor devices submit bids to the auctioneer.
Winner Determination -The auctioneer determines the winner and allocates network resources accordingly.
With regard to this paper, initialization is performed as a one-off process at the outset of the WSN's deployment. We will not consider physical additions or modifications to the WSN after deployment. The only modification is updating each device's priority for network resources (hence this is really a semi-dynamic deployment). A series of auctions will be conducted throughout the deployment's life at predefined intervals. After winner determination, sensors will have their priorities updated with regard to their placement in the auction process. Note that all sensors will continue to transmit their sensor data in-between auction rounds. However, an individual sensor's priority to transmit will change from round to round. The specifics of this process are described in the next section.
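The semi-dynamic auction cycle can be sketched as below. This is an illustrative simplification: the initial priorities, the fixed 0.4 boost for a sensor observing a significant change, and the priority carry-over rule are all assumptions standing in for the environmental model's evaluation:

```python
# Illustrative auction cycle for the semi-dynamic deployment: one-off
# initialization, then one auction per interval, with each sensor's
# priority updated after winner determination.
def run_deployment(sensors, change_events):
    """change_events: one entry per auction round, naming the sensor
    that observed a significant change (or None)."""
    priorities = {s: 0.5 for s in sensors}                 # initialization
    winners = []
    for changed in change_events:                          # one auction per t_i
        bids = {s: min(1.0, priorities[s] + (0.4 if s == changed else 0.0))
                for s in sensors}                          # bidding
        winners.append(max(bids, key=bids.get))            # winner determination
        priorities = bids                                  # carried into next round
    return winners
```

Running `run_deployment(["s1", "s2"], [None, "s2", None])` illustrates the dynamic update: once `s2` observes a change in round two, it wins that auction and retains its raised priority into round three.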

Mathematical Model
The goal for the simulated results (presented in Section 5) is to observe how the system performs while monitoring a flood's progress as it moves down a river (see Figure 4).Conceivably such a scenario could alert authorities to areas where people and property are in danger, or trigger flood gates to take action to avoid economic loss.
The geographical area L is divided into a two-dimensional M × N grid, where (i, j) denotes the location (position) of the cell in the ith row and jth column.
The phenomenon under study (such as a flood or bush fire) is denoted as

P = {(p_1, q_1), (p_2, q_2), ...}, P ⊆ L. (2)

Figure 4: An example scenario with monitoring the progress of a flood down a river

In the proposed example a river, R, flows through the area. Assuming that a flood moving down the river is completely contained within the river, then P ⊆ R. There is a set of sensor devices S = {s_1, s_2, ..., s_n}, where s_i (1 ≤ i ≤ n) indicates the ith sensor's geographical location within L. The commodity up for auction is priority: the right for a sensor device to acquire first access to the network for the resources it needs to satisfy its currently assigned peak load. All sensor devices have a balance of currency, denoted as $, with which to bid for bandwidth.
$ is allocated to, or taken from, devices by the bank based on a specific sensor device's characteristics.
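A minimal sketch of the spatial model, assuming cells are represented as (row, column) tuples; the toy river shape and flood cells are placeholders, not the paper's actual layout:

```python
# Toy instantiation of the spatial model: cells are (row, col) tuples,
# the area L is an M x N grid, and the river R and phenomenon P are
# cell sets satisfying the containment constraints P <= R <= L.
M, N = 10, 10
L = {(i, j) for i in range(1, M + 1) for j in range(1, N + 1)}
R = {(i, 5) for i in range(1, M + 1)}     # assumed river: one column of cells
P = {(1, 5), (2, 5)}                      # assumed flood front cells
assert P <= R <= L                        # containment holds by construction
```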

Experimental Setup
In the proposed example there are 100 cells arranged in a square array (i.e., M = N = 10). The river occupies 34 cells, with R = {(i_1, j_1), ..., (i_34, j_34)}. The flood (or phenomenon) is represented as a software agent. The flood occupies k cells, with k ≤ 34. Note that F ⊆ R. Time is denoted as T = {t_1, ..., t_i, ..., t_I}. Figure 3 shows the direction the river is moving in. The front or leading edge of the flood changes position over time. This is randomly allocated to be between 1 unit and 4 units per time measure. I is determined by how quickly the flood moves downstream (e.g., based on the random allocation of the leading edge during each t_i), and is therefore bounded by how fast the flood progresses down the river. The auction house is represented as a software agent that resides on the base station. An auction is conducted for each unit of time. The simulation terminates once the flood has progressed past point p_m in L.
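The flood agent's movement can be sketched as follows, under the assumption that the leading edge advances along a one-dimensional river path; the function name and termination condition are illustrative:

```python
import random

# Sketch of the flood agent: the leading edge advances a random 1-4
# cells per time step along the river until it passes the final cell.
# The return value corresponds to I, the number of auction rounds run.
def simulate_flood(river_length=34, seed=1):
    rng = random.Random(seed)
    front, steps = 0, 0
    while front < river_length:
        front += rng.randint(1, 4)   # leading edge moves 1-4 units per t_i
        steps += 1
    return steps
```

With a 34-cell river, I always lies between 9 (every step advances 4 cells) and 34 (every step advances 1 cell).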
In this paper we assume that all sensor devices are homogeneous and sample the same thing (i.e., water level). Each device takes samples according to a sensing interval, which is identical for all sensors in this example. Each sensor device is represented as a software bidding agent. There are eight sensor devices that occupy one square each (i.e., n = 8). We assume that two sensor devices cannot occupy the same location.

Sensor Characteristics
Each sensor device has the following characteristics:

Precision α (0 ≤ α ≤ 1) - This refers to how many significant digits the device measures. The sensor device's importance will depend on the application (as evaluated by the environmental model).
Cost β (0 ≤ β ≤ 1) - In general, given two devices that measure the same characteristic, the data collected from the more expensive device would be more valuable/accurate for the application. Cost can be normalized between the least expensive s_min and the most expensive s_max sensor devices as follows:

β_i = (cost(s_i) - cost(s_min)) / (cost(s_max) - cost(s_min)) (3)

Change in Readings γ (0 ≤ γ ≤ 1) - When a significant change to the currently observed readings occurs, this suggests that there has been a change in the phenomenon under study and therefore this sensor should be given attention. Each sensor collects one reading per time unit, R = {r_1, r_2, ..., r_j}. This can be represented by a threshold Δ, which indicates how far from the mean reading the newly observed reading is: γ = 1 if the latest reading r_j deviates from the mean by more than Δ, and γ = 0 otherwise. For the purpose of this example, the change indicator is binary in that either there is a change or no change in the latest observed reading in time period t_i.

Location δ (0 ≤ δ ≤ 1) - The importance of the sensor's geographical location (as evaluated by the environmental model).

Data Collected ε (0 ≤ ε ≤ 1) - The amount of data collected, in terms of how full the sensor device's memory buffer is.
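As an illustrative sketch (the paper leaves the exact combination of characteristics to the environmental model), the cost normalization and a simple combination of the characteristics into a bid value might look like this; the equal weights are an assumption:

```python
# Illustrative normalization of cost and an assumed equal-weight
# combination of the characteristics into a bid value in [0, 1].
def cost_factor(cost, cost_min, cost_max):
    """Normalized cost: 0 for the cheapest device, 1 for the most
    expensive; 0 when all devices cost the same."""
    if cost_max == cost_min:
        return 0.0
    return (cost - cost_min) / (cost_max - cost_min)

def bid_value(alpha, beta, gamma, weights=(1 / 3, 1 / 3, 1 / 3)):
    """Weighted mean of characteristics in [0, 1]; result lies in [0, 1]."""
    return sum(w * c for w, c in zip(weights, (alpha, beta, gamma)))
```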

The Environmental Model
The environmental model dictates the rules for the WSN and focuses on a specific set of features. For example, in the SEMAT [24] deployments the environmental model examined causal factors for algal blooms in marine environments [11]. The environmental model accepts input from the sensors in the form of sensor readings and also the γ and ε factors. The model uses its underlying algorithms to deduce values for α, β and δ. These are passed back to the auctioneer to evaluate $ for each sensor device. The rule for the environmental model in the flood example is: $ should be highest for the current s_i observing the phenomenon and the next one down river expecting to encounter it.
A decision-maker, a government agency, or a company with an interest in the data from the WSN can pay money to increase the priority of a sensor (or group of sensors).Furthermore, the environmental model can be extended by tracking algorithms such as those proposed by [1], [2], [13].

Implementing a Priority-Based Auction Scheme
The properties of the CSMA protocol can be used to implement a priority-based auction scheme. CSMA is a probabilistic media access control protocol that allows all devices to transmit at any time on a network medium. If two devices attempt to transmit at the same time, then a collision occurs. The devices are then forced to back off for a random time period and attempt to re-transmit later. This protocol can be used to implement prioritized access to the network: the priority of a device determines how long it is forced to wait before attempting a retransmit. For example, suppose there are two devices s_1 and s_2, where s_1 has higher priority. If a collision is detected, both stop transmitting and wait. As s_1 is of higher priority, the amount of time it has to wait before retransmitting is lower than that of s_2. The priority approach can be implemented using the p-persistent mode in CSMA/CD. When the sender is ready to send data, it checks continually whether the medium is busy. If the medium becomes idle, the sender transmits a frame with probability p. If the station chooses not to transmit (the probability of this event is 1 - p), the sender waits until the next available time slot and transmits again with the same probability p. This process repeats until the frame is sent or some other sender starts transmitting. In the latter case the sender monitors the channel, and when it becomes idle, transmits with probability p, and so on.
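The p-persistent transmission decision described above can be sketched as follows; the slot model is a simplification that ignores carrier sensing of other stations:

```python
import random

# Sketch of the p-persistent transmission decision: in each idle slot a
# ready sender transmits with probability p, otherwise it defers to the
# next slot. A higher-priority sensor would be assigned a larger p.
def slots_until_transmit(p, rng):
    """Number of idle slots a sender defers before transmitting."""
    slots = 0
    while rng.random() >= p:   # defer with probability 1 - p
        slots += 1
    return slots

rng = random.Random(42)
trials = 10_000
high = sum(slots_until_transmit(0.9, rng) for _ in range(trials)) / trials
low = sum(slots_until_transmit(0.2, rng) for _ in range(trials)) / trials
# Expected wait is (1 - p) / p slots, so the high-priority sender
# (p = 0.9) waits far fewer slots on average than the low-priority
# sender (p = 0.2).
```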

The Auction Protocol
Figure 5 illustrates the auction protocol, and each phase is described below. Initialization is performed once at the deployment of the WSN. The remaining phases are performed repeatedly. There is one auction for each t_i ∈ T. Prior to each auction, the environmental model is updated with the sensor readings received in that round (which influences a sensor's α, β, δ values). There are two types of data transmitted by a sensor. The first is the sensor's readings. A sensor will attempt to transmit this data continually regardless of its priority. The second type of transmission is a sensor's bid data. A sensor's bid data is only transmitted as needed to conduct an auction round. The auction protocol contains the following steps within the three major auction stages:

Initialization - Each sensor device is assigned its $ value according to the environmental model's evaluation of (α, β, δ). γ and ε are initially excluded (i.e., set to 0) as no data has been sampled yet.
Winner Determination - The auctioneer combines the bid data received from each bidder and the environmental model's evaluation (i.e., (α, β, δ, γ, ε)) to determine $ for each bidder (Step 5). Each bidder is allocated p for its CSMA wait time as follows: p = 1 - $. The value of p is sent to each particular sensor. Sensors then resume transmitting their sensor data according to their received priority. If a lower priority sensor transmits its sensor data when a higher priority sensor is trying to transmit, the lower priority sensor is forced to back off. The environmental model is updated with the new sensor readings it receives from the prioritized devices. This data influences future auction rounds based on its significance to the environmental model and/or whether someone is willing to pay money in an attempt to boost a sensor's priority.
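The winner-determination mapping (p = 1 - $ for each bidder) can be sketched as follows; the $ evaluations passed in are illustrative placeholders for the environmental model's output:

```python
# Sketch of the winner-determination step: each bidder's evaluated $
# value (in [0, 1]) is mapped to its CSMA parameter p = 1 - $, so the
# highest bidder receives the smallest wait parameter.
def winner_determination(evaluations):
    """evaluations: dict mapping sensor id -> $ in [0, 1].
    Returns (winner, p) where p maps each sensor id to 1 - $."""
    p = {s: 1.0 - dollars for s, dollars in evaluations.items()}
    winner = max(evaluations, key=evaluations.get)
    return winner, p
```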

Results and Discussion
The OMNeT++ network simulator was used to test the auction method. Performance is dependent on the core MAC layer method that has been designed to control CSMA/CA network contention. The simulations use the default IEEE 802.15.4 MAC layer as a baseline for comparison, and test how the proposed auction method behaves under continuous traffic congestion and temporal (event-driven or converging) traffic. To test congestion/contention, the simulations vary the traffic loading (packets generated per second). The simulation result metrics (throughput, delivery (errors), and delay) provide evidence of the auction method's quality and performance improvements.

Performance Criteria
The quality and performance criteria involved in evaluating the auction method are:

Baseline vs. MAC layer method vs. auction method vs. real world scenario phases - These simulation phases allow for comparisons to validate the method within different system modules and conditions; and

Node-based vs. network-based simulations - Permits analyzing traffic performance results between low-level individual node prioritization compared to continuous higher-level network loading (congestion).

Figure 6: Simulation phases and hierarchy

Figure 6 illustrates how the comparison of different phases for evaluating the network results requires building a baseline default network, against which all other method results are compared. The MAC layer phase simulates the low-level (localized nodes) network contention of IEEE 802.15.4. The auction method phase simulates the high-level (node resources) auction control, which controls the MAC layer method. The real world phase simulates the flood scenario, where each node in sequence generates data with temporal high priority traffic. These simulation phases provide an incremental evaluation of the proposed method:

Base-line phase - Build a network simulation with static default MAC layer parameters for all nodes to control network congestion;

MAC layer method phase - Configure static MAC layer parameters (macMinBE, macMaxBE and macMaxCSMABackoffs) for prioritizing all nodes to control network congestion;

Auction method phase - Develop an auction method that uses characteristics to control the MAC layer, which in turn controls network congestion; and

Real world scenario phase - Dynamically change auction characteristics to prioritize each node over time to monitor the delivery and delay performance results.
Testing the node-based (local) performance is more important than network-based (global) performance tests. This allows evaluation of how each node is performing individually and relative to other nodes in the network. The network-based simulations are beneficial for a final validation of the whole method's performance. The different simulation levels for testing the performance of the WSN include:

Node-based simulations - Evaluate how the auction method affects each node's ability to prioritize its traffic while maintaining a static 80% network loading (packet rate) to provide network contention. The 80% network loading was chosen to provide enough congestion, but not affect the results through saturation and temporal congestion. Simulation results are averaged over multiple runs; and

Network-based simulations - Evaluate how the auction method affects network congestion under changing network loading (packet rate). This mainly tests the method's performance near the start of network saturation (100% congested) and is used to indicate the method's effectiveness under continuous congestion. The network-based single-run simulation is used to compare the method's results to the baseline results, in order to provide an overall gauge of the method's performance. The network performance test is carried out only after all the node simulations have been completed, and has the same setup of network parameters (different priorities between nodes) as the node simulations. The network performance simulation is used to validate the node simulations.
The throughput and delivery rate performance metrics gauge the method's reliability and bandwidth utilization, whereas the delay metric helps to determine the fairness and Quality of Service (QoS) of the network traffic:

Throughput (bps) - Number of bits (low level) received over a period of time;

Delivery Rate (%) - Number of valid packets (high level) received compared to what is transmitted. This indirectly indicates the number of receiving errors (i.e., dropped packets); and

Delay period (s) - Time between creating and receiving a valid packet.
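The three metrics can be computed from a simple packet log as sketched below; the log format (send time, receive time or None for a dropped packet) and the fixed packet size are assumptions:

```python
# Illustrative computation of the three metrics from a packet log of
# (send_time, receive_time) pairs, with receive_time = None for a
# dropped packet.
def metrics(log, duration_s, bits_per_packet=1024):
    """Returns (throughput_bps, delivery_pct, mean_delay_s)."""
    received = [(s, r) for s, r in log if r is not None]
    throughput = len(received) * bits_per_packet / duration_s
    delivery = 100.0 * len(received) / len(log) if log else 0.0
    delay = (sum(r - s for s, r in received) / len(received)
             if received else 0.0)
    return throughput, delivery, delay
```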

Auction Simulation Goals
The goals and expectations for analyzing the auction method's simulation results are:

To focus on improving network performance, i.e., not simulating all real world resource situations (location dependencies, buffering levels, and changes between auction periods);

To successfully simulate a WSN with contention representative of real world network behavior and traffic;

To improve method performance compared to the baseline default IEEE 802.15.4 CSMA/CA;

That prioritized traffic will improve throughput and delay depending on node priority; and

That the effects of increasing congestion (traffic load) are increased contention, which increases collisions, throughput, dropped frames, and delay.

Assumptions
The reliability of the throughput and delivery performance is a priority over the delay metric (end-to-end latency) of network traffic. The method trades off between these performance metrics, improving the delivery results at some expense to the delay results.
Network traffic prioritization can be tested on the foundation of the MAC layer phase; the auction phase confirms that the auction method can control the critical MAC layer mechanism. The majority of the simulations statically set the MAC layer parameters and the auction characteristics in order to thoroughly test their effects on the network. The number of test runs and the run time help to remove randomness from the node and network results; this randomness stems from the varied packet transmission periods and the randomly generated back-off values of the baseline MAC layer contention (IEEE 802.15.4).
The expected improvement in bandwidth utilization lies around the start of network traffic saturation; at full saturation the bandwidth is fully utilized and no improvement is possible. Another expected improvement is a reduction in the delay of traffic received from high priority nodes in comparison to low priority nodes.

Simulation Results
The simulation tests compared the baseline against the auction method. This involved establishing a static network in which the packet generation interval and back-off values are dynamically adjusted. For each baseline and method simulation there are individual tests that allow performance comparisons between node-based and network-based results, and each test records the network performance metrics.
The simulation results and analysis are grouped by network performance metric (i.e., throughput, delivery, and delay). This grouping makes it easy to analyze the influence the MAC and auction method properties have on the different network metrics, so that one metric (delivery) is not favored over another (delay). The order and details of the simulation tests are: the baseline node-based simulation, run as 10 separate test runs of 30 seconds each; the baseline network-based simulation, run with varied network traffic loading (0.25-4 packets per second) for 30 seconds per run; the MAC layer method, tested by running node-based simulations with the same default baseline parameters while adjusting combinations of the macMinBE, macMaxBE, and macMaxCSMABackoffs parameters over multiple simulation tests, in order to improve all the performance metrics; the network-based performance simulations, run to validate the node-based results; and the auction method characteristic simulations, which confirm that controlling the MAC layer method provides the same results for prioritizing the nodes.
In the MAC layer phase tests, each of the parameters (macMinBE, macMaxBE, macMaxCSMABackoffs) was adjusted in its range, its amount of scaling, and its prioritization between nodes. Each adjustment required the node-based performance tests to be run a number of times to produce averaged metric results.
The MAC method simulations prioritize nodes by setting the MAC layer parameters (macMinBE, macMaxBE, and macMaxCSMABackoffs), with each parameter given the same importance. As macMinBE (the back-off duration exponent) decreases, macMaxCSMABackoffs (the number of attempts) is increased proportionally. This ensures the total time before a frame is finally transmitted remains appropriate across priorities, reducing dropped frames and excessively long delays for low priorities. The parameters have very small ranges, so a node's auction priority value must be scaled to fit within each range.
The simulations varied the MAC layer macMinBE (initial random exponent) parameter with different values and different prioritization between nodes, within the range macMinBE = [0-7]. The macMaxBE (final random exponent) parameter was altered with different limits and prioritization between nodes, within the range macMaxBE = [5-8] and subject to macMaxBE >= macMinBE. The macMaxCSMABackoffs (number of transmit attempts) parameter was changed with different limits and prioritization between nodes, within the range macMaxCSMABackoffs = [2-8].
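The scaling described above can be sketched as a simple linear mapping from a node's auction priority into the quoted parameter ranges. The mapping function and the normalized [0, 1] priority scale are assumptions for illustration, not the paper's exact scaling.

```python
# Sketch: scaling a node's auction priority into the 802.15.4 MAC parameter
# ranges quoted in the text. The linear mapping and the priority scale
# (0 = lowest, 1 = highest) are illustrative assumptions.

def mac_params(priority):
    """priority in [0, 1]; returns (macMinBE, macMaxBE, macMaxCSMABackoffs)."""
    min_be = round(7 * (1 - priority))                    # high priority -> short initial back-off
    max_be = max(5 + round(3 * (1 - priority)), min_be)   # within [5, 8] and >= macMinBE
    backoffs = 2 + round(6 * priority)                    # high priority -> more transmit attempts
    return min_be, max_be, backoffs

# Highest priority node: shortest back-off, most retries
# mac_params(1.0) -> (0, 5, 8); lowest priority: mac_params(0.0) -> (7, 8, 2)
```

Note the inverse relationship the text requires: as macMinBE shrinks, macMaxCSMABackoffs grows proportionally, keeping the total transmit window comparable across priorities.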
As expected, the node-based simulation results for the auction and MAC layer methods are similar, since the auction characteristics set each node's priority, which in turn controls the MAC layer parameters with the same values. The auction method simulations involve: utilizing the MAC layer method to create the parameters that prioritize the nodes; a simulation with each node set to one characteristic, with unique priorities covering the entire range; a simulation with each node set to the same characteristic, with unique priorities covering the entire range; a simulation with each node set to one characteristic with the same priorities and weightings; and a simulation with each node set to one characteristic with multiple priorities and weightings.

Throughput
The throughput metric defines the receiving rate over time for each node and the entire network. The resolution chosen for the throughput metric is bits per second (bps) rather than packets per second. Node-Based Simulation Results - Each node's throughput values are averaged over all test runs, and the node's throughput mean is compared with the same node between the baseline and the method. All test runs have randomness in generating packets, and this variation can be seen in the baseline results between nodes (Figure 7 (a)). The method results show all nodes with a higher throughput than the baseline, with an average increase of (method - baseline) / baseline = (684.13 - 591.96) / 591.96 = 15.5%. This results from the increased bandwidth utilization of each prioritized node and the increased number of retries. As expected, the high priority nodes (node 0 being the highest) have higher throughput than the low priority nodes (node 7 being the lowest), due to the prioritization of all MAC layer parameters between nodes and a decrease in dropped frames.
Network-Based Simulation Results - The network-based traffic throughput simulation (Figure 7 (b)) tests from low through to high rates of generated network traffic. Ideally, network throughput increases with a linear gradient up to saturation and then plateaus. The baseline does not show this behavior: its curve bends before reaching the start of the plateau (the start of saturation). The method results demonstrate that throughput is increased from around the start of the network loading through to saturation. The results show slight fluctuations, as the network simulations use a single run per traffic load setting (packets/second). The method's network throughput is increased for the following reasons: the nodes do not share similar back-off timeout values, bandwidth utilization increases (each node takes its turn in accessing the channel), fewer packets are dropped, and the high priority nodes wait less (an increased number of small back-offs). The best-case throughput improvement of ((6000 - 5000) / 5000) * 100 = 20% occurs at the mid-point difference between the baseline and method (traffic load of 2.0 packets per second).
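The percentage figures above and in the following sections all use the same relative-change calculation, sketched below. The operand values are read off the result plots in the text, so the percentages are approximate.

```python
# Sketch: the relative-improvement calculation used for the throughput and
# delivery comparisons. The operand values are read off the result plots
# in the text, so the percentages are approximate.

def improvement_pct(method, baseline):
    """Percentage change of the method result relative to the baseline."""
    return 100.0 * (method - baseline) / baseline

node_based = improvement_pct(684.13, 591.96)   # node-based mean throughput, ~15.5%
network_peak = improvement_pct(6000, 5000)     # network throughput at 2.0 pkt/s, 20.0%
```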

Delivery
The delivery metric defines the received (RX) vs. transmitted (TX) rate (useful for indicating the error rate) for each node and the entire network. The delivery metric is more critical than the delay metric, as reliability (quality) is more valuable here than response time (performance). The numbers of received and transmitted packets are counted during the simulation, and the delivery rate is calculated when it finishes. The delivery metric is closely related to the throughput (received over time) metric and shows a similar increase in the results. Node-Based Simulation Results - Each node's delivery values are averaged over all test runs (Figure 8 (a)), and the node's method delivery ratio mean is compared to the same node in the baseline. The delivery results show that all nodes have a higher delivery rate than the baseline, with an average increase of (method - baseline) / baseline = (69.9 - 60.4) / 60.4 = 15.7%. This is caused by the increased duration and number of retries before a frame is dropped, resulting in fewer dropped frames. The high priority nodes (node 0 being the highest) have a slightly higher delivery rate than the low priority nodes (node 7 being the lowest), as a result of the prioritization of all MAC layer parameters between nodes. Network-Based Simulation Results - The network-based traffic delivery simulation (Figure 8 (b)) tests low through to high rates of network traffic. Ideally, network delivery increases with a linear gradient between no network load and saturation. The method's reduction in dropped packets shrinks as the network becomes more saturated, since there is less room to fit extra traffic during saturation (> 2.0 packets per second); this explains the greater improvement over the baseline at lower network loading. The effect of network contention in controlling sequential access to a single channel's bandwidth is apparent from the change in delivery rate as the network load is increased up to saturation: the nodes drop more packets at random, these dropped packets being the effect of the retry limit being exceeded as competing nodes attempt to access the channel. The efficient use of bandwidth and the decrease in dropped frames result in increased delivery performance. The best-case delivery improvement of ((65.7 - 55.7) / 55.7) * 100 = 17.9% occurs at the mid-point difference between the baseline and method (network traffic load of 2.0 packets per second).

Delay
The delay metric defines the duration (end-to-end latency) between the timestamp at which a packet is created for transmission and the timestamp at which it is validly received, for each node. The delay metric helps to identify the total period of back-off re-attempts to transmit a packet for each prioritized node. For each generated packet sent (with no hops) to the gateway node, the packet delay is averaged per node over the whole simulation run. The delay metric represents the response time for detecting an event in the problem domain.
Analysis of the delay metric is important: if only the throughput and delivery results were considered when evaluating the method's performance, the delay (mainly affecting low priority nodes) could increase substantially, producing noticeably long response times (possibly in the seconds) in detecting events in the WSN. A further negative effect is node buffer overflow while packets are waiting to be transmitted.
When comparing delivery and delay, the overall increase in delivery (and throughput) of all nodes compared to the baseline also increases the overall delay of all nodes as the number of prioritized transmissions increases. The method causes a large delivery increase while causing only a slight delay increase. This results from the increased number of transmissions (throughput) and from the default behavior of the MAC layer, which is less likely to transmit successfully with small back-off timeouts (high priority nodes). The cause is that even though the back-off starts low for high priority nodes, failed transmit attempts increment the BE (widening the back-off range) on each attempt, so the frame is eventually transmitted at a larger back-off similar to the baseline; none of the MAC layer parameters can be adjusted to resolve this. When comparing each node's delays, the high priority nodes have a shorter delay (end-to-end latency) than low priority nodes, as the MAC layer parameters are prioritized to favor a shorter back-off timeout, allowing them to transmit sooner (Figure 9 (a)). Each node's method delay result (over a number of runs) is compared to the same node in the baseline results; the randomness of packet generation and back-off timeouts needs to be considered when comparing nodes. The node delay results increase incrementally from the high priority nodes (node 0) through to the low priority nodes (node 7), due to the prioritized number of transmit retries for each node. The extra delay for low priority nodes results from the increased back-off range, the IEEE 802.15.4 back-off slot period (20 symbols), and the increased total time taken to attempt to transmit the same frame. The method's results show a relative delay difference between the highest and lowest priority nodes of node 7 - node 0 = (0.1133 - 0.0846) = 42 ms. This is an acceptable amount of delay for the lowest priority node when compared to its transmit rate (approximately one packet per second). Each transmit attempt has no guarantee that the actual back-off timeout is relative to the node's priority. The baseline delay results are similar between nodes, as all nodes have the same MAC layer parameters. As expected, the auction and MAC layer methods have similar results, since the auction characteristics set each node's priority, which controls the MAC layer parameters with the same values. Network-Based Simulation Results - The network-based traffic delay simulation (Figure 9 (b)) tests how three representative nodes are prioritized while the generated network traffic load is adjusted from low to high rates.
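The back-off escalation described above (BE incrementing on each failed attempt, capped at macMaxBE) can be sketched deterministically. The worst-case window sums below are illustrative upper bounds, not the random back-offs the MAC actually draws; the 20-symbol unit back-off period follows IEEE 802.15.4.

```python
# Sketch of why a low initial macMinBE does not guarantee a short total
# back-off: after each failed attempt the back-off exponent BE increments
# (capped at macMaxBE), so repeated failures push every node toward the
# same large windows. Deterministic worst case for illustration only.

def worst_case_backoff_slots(min_be, max_be, max_backoffs):
    """Sum of the largest possible back-off window at each failed attempt."""
    total, be = 0, min_be
    for _ in range(max_backoffs + 1):   # initial attempt plus retries
        total += 2 ** be - 1            # back-off is drawn from [0, 2^BE - 1]
        be = min(be + 1, max_be)
    return total

# High priority (macMinBE=0) vs low priority (macMinBE=3), macMaxBE=5, 4 back-offs:
# worst_case_backoff_slots(0, 5, 4) -> 0+1+3+7+15 = 26 slots
# worst_case_backoff_slots(3, 5, 4) -> 7+15+31+31+31 = 115 slots
```

After a few failures both nodes draw from windows near 2^macMaxBE, which is exactly the convergence toward baseline behavior the text describes.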
The network delay increases slightly as the network traffic load increases, and this slight increase occurs at the same rate for both the method and the baseline. This demonstrates that the method has the same performance as the baseline, apart from the relative increases for low priority nodes. The reason for this steady increase is that the average packet delay grows as more packets wait to be transmitted.
At the mid-point of the network traffic loading (2.0 packets per second), the average delay shows an extra increase for the highest priority node of (0.09 - 0.0815) = 8.5 ms (resulting from the extra throughput), and for the lowest priority node of (0.1275 - 0.0805) = 47 ms - 8.5 ms (extra throughput) = 38.5 ms. The network-based results are similar to the node-based results and confirm that the delay results are valid. The average extra delay (added to the baseline average) is consistent throughout the network traffic loading range, demonstrating that each node's delay will remain consistent as congestion increases. Unlike the throughput and delivery metrics, the network-based delay metric in Figure 10 shows the prioritization effects on each node, allowing analysis of the method's performance per node throughout the network loading range.

Results Validation
The simulation times are validated by comparing different run durations with a single run of the baseline network, without changing any other settings. The test settings are: bit rate = 9600, network traffic load = 1.7. The test monitors the network status (Table 1) to verify that there is no large standard deviation in the results; the standard deviation is small for all of the simulation times tested, and the chosen simulation time is 30 seconds. The number of runs per test is validated by comparing different numbers of runs in the node performance results (Table 2), averaging each metric over all runs. The test settings are: simulation time = 15 and network load = 1.75 for node 0. This validates that the mean of each metric remains fairly consistent as the number of runs increases, and that the standard deviation of each metric decreases as more run data points are averaged; the chosen number of runs is 10. The traffic network load per test is validated by comparing different traffic loads on a baseline IEEE 802.15.4 network to determine a packet rate at which the network is congested without being saturated (Table 3). The test settings are: 8 nodes, bit rate = 9600, and simulation time = 10. The chosen traffic load is 1.75 packets/second, which provides ~80% network loading with all nodes transmitting to the gateway node.
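The run-count validation can be sketched as follows: the metric mean should stay consistent while the uncertainty of the averaged result shrinks as more runs are included. The per-run throughput values below are illustrative, not the data behind Table 2.

```python
# Sketch: the run-count validation — the metric mean stays consistent while
# the standard error of the averaged result shrinks as more runs are added.
# The per-run values are illustrative, not the data behind Table 2.
import statistics

runs = [591.2, 598.4, 587.9, 595.1, 593.3, 590.8, 596.2, 592.7, 594.0, 589.9]
for n in (3, 5, 10):
    sample = runs[:n]
    sem = statistics.stdev(sample) / n ** 0.5   # standard error of the mean
    print(n, round(statistics.mean(sample), 2), round(sem, 2))
```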

Real World Flood Scenario
This test simulates a flood progressing down a stream, with the water height changing over time. The simulation confirms how changing each node's priority improves its performance relative to neighboring nodes, and how reliably the auction method performs while controlling all the nodes. The simulation focuses on increasing the priority of three nodes, as the expected results can be repeated for other nodes; this maximizes the contrast between the nodes' results, although the simulation complexity increases and requires a much longer run time. All nodes are within range of each other and have a static traffic loading of around 80% to create network contention. As time passes (each auction period), a selected node processes the auction round with a newly increased value for the auction's sensor sample change characteristic, which changes the node's priority for a period of time. For each auction period, the recorded results since the last auction period are compared to the baseline results.
The auction period is set to four seconds, a duration long enough to detect any changes. An attempt to run the simulation capturing discrete results (vectors) throughout the run did not work, as there was excessively random information for each message sent; the simulation was instead altered to gather statistical summary results at the end of each auction period, allowing a single mean value per period (Figure 11). The real-world simulation results contain randomness errors (i.e., there is no guarantee that the actual back-off per transmit attempt is relative to the node's priority), but the trend in each metric over time demonstrates that the results are improving. As expected, the delivery metric results do not change by a large amount. The simulation tables (Tables 4, 5 and 6) group the results into the delivery and delay performance metrics, and show the relationship between the auction update periods and the nodes in comparison to the baseline results. The comparison against the baseline helps to evaluate the method's results, as if the method had done nothing in each auction period (round). The simulation results with the method's prioritization changes (Table 6) are compared to the baseline (no priority change) to confirm that each node's performance improves for each packet sent at a higher priority. The high priority nodes show an increase in delivery (average 1.57%) and a decrease in delay (average 9 ms) for each auction period. As expected, each high priority node's metrics improve the longer the node is set to a higher priority (for each auction update). As mentioned, these per-period metrics will decrease depending on the amount of time the node spends at high priority relative to the accumulated simulation time. These values still provide a good indication that the method improves performance, even in the worst-case situation where not all neighboring nodes have unique priorities.
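The per-period priority update in the flood scenario can be sketched as a rising sample-change characteristic feeding a weighted-sum bid. The weighted-sum form, the weights, and the characteristic values below are illustrative assumptions, not the paper's calibrated characteristics.

```python
# Sketch of the flood scenario's per-period priority update: as the flood
# front reaches a node, its sample-change characteristic rises, its weighted
# bid grows, and it holds a higher priority for the following auction period.
# Weights and characteristic values are illustrative assumptions.

def bid(characteristics, weights):
    return sum(weights[c] * characteristics.get(c, 0.0) for c in weights)

weights = {"cost": 0.1, "precision": 0.2, "location": 0.2,
           "sample_change": 0.4, "data_volume": 0.1}
node = {"cost": 0.5, "precision": 0.8, "location": 0.6,
        "sample_change": 0.1, "data_volume": 0.3}

history = []
for _ in range(4):                    # four 4-second auction periods
    # flood front arriving: the sample-change characteristic rises each period
    node["sample_change"] = min(1.0, node["sample_change"] + 0.3)
    history.append(round(bid(node, weights), 2))
# history -> [0.52, 0.64, 0.76, 0.76]: the bid saturates once the change maxes out
```

The node with the highest bid in a round would then win preferential network access until the next auction update.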
The simulation results are also compared across the method's prioritization changes between nodes at each auction period (Figure 12 (a)). The high priority node has a slightly higher delivery metric than the lower priority nodes in each auction period, and a noticeable decrease in the delay metric compared to the slight change in delivery. The greater the number of nodes raised to higher priority, the smaller the improvement in the higher priority nodes' metrics relative to the lower priority nodes. Consequently, the duration (number of auction periods) for which a node is set to higher priority needs to be considered, especially if a substantial number of other neighboring nodes are also set to high priority. Each node's results improve at a different rate relative to the other nodes (they have different starting metric values when their priority is increased), and each individual node's results improve over time.
In the real-world simulations, the auction period results (RealWorldAUCx) accumulate over the entire simulation run time up to each auction update point, dynamically changing the nodes' priorities throughout the simulation. As a comparison, a separate simulation test (Figure 12) holds a single node (node 2) at higher priority for the entire run.

Conclusions
This paper presents a framework for smart sensors that participate in online auctions to acquire WSN resources. A sensor's bid value is based on its characteristics: cost, precision, importance of location, changes in regular readings, and amount of data collected. Priority is dynamically updated over time with regard to these characteristics, changes in the phenomenon under observation, whether someone is willing to pay for priority, and input from an environmental model. The environmental model allows the system to reason over collected data and provides near real-time analysis for decision makers (who can also influence priority based on economic decisions). We presented an example scenario of monitoring a flood's progress down a river to illustrate how the system operates, and undertook a series of simulations to examine how the system functions under different conditions.
The network control outcomes from the simulations are: 1) a configurable network simulation for testing the proposed methods; 2) the ability to capture scalar and vector network results to calculate the performance metrics, in order to analyze and validate the auction method's results; 3) a demonstration that the proposed auction method and MAC layer method produce similarly improved network performance compared to the baseline 802.15.4, with a trade-off between delivery and delay; and 4) node simulation results that improve the design and prove the MAC layer method prioritizes the nodes during contention.
The performance metric outcomes from the simulations indicate: 1) the auction method increases the delivery results while decreasing the delay results for high priority nodes, with the opposite applying for low priority nodes, based on the node-based results; 2) node-based (constant traffic load) simulations provide a means to evaluate each node's performance relative to the others, and verify that improving one metric does not adversely affect the other metrics; 3) network-based (variable traffic load) simulations show that the method can handle varied traffic loading and that the node simulation results improve network performance; 4) the node-based averaged throughput increased by 15.5% between the baseline and method results; 5) the network-based maximum throughput increase is 20.0% between the baseline and method, around a network loading of 2.0 packets per second; 6) the node-based average delay difference between high priority and low priority nodes is 42 ms, an increase that is acceptable for the low priority node in the problem domain; 7) the network-based delay results are similar to the node-based results between the baseline and method, and remain consistent as traffic loading increases, indicating the auction method will behave the same as congestion increases; 8) the network throughput performance criteria show consistent improvements in both the throughput metric and the delivery rate metric; and 9) we validated that the method does not introduce excessive delays.
The simulation phase outcomes show: 1) the MAC layer method vs. baseline comparison indicates that low-level MAC layer control improves local network performance independently, without having to consider the auction method (i.e., WSN resource management); 2) the auction method vs. baseline comparison suggests the high-level auction method improves global network performance by generating improvements similar to the MAC layer method; 3) in the real-world simulation tests it proved difficult to bias the MAC method to produce noticeable results for the higher priority nodes over shorter durations; and 4) the real-world vs. baseline comparison indicates that an individual node's performance under the method improves over time (auction periods).
The real-world simulations suggest that higher priority nodes achieve improved performance under the method compared to lower priority nodes: the same amount of data is sent with increased delivery (fewer drops) and decreased delay (shorter back-offs). The simulation induces the worst-case situation, with most nodes sharing the same priority.
Future work involves investigating different auction types (such as continuous double auctions, CDAs) and structures (e.g., peer-based) and their implications for sensor networks. Other work includes integrating more representative environmental models that operate in conjunction with semantic annotation and inference engines [12], [20] for event tracking throughout a geographical area. Additionally, more truly dynamic systems need to be examined, where the sensor network configuration can change during the course of the deployment. Furthermore, starvation in a network is a serious issue that needs to be addressed. Finally, the proposed scheme does not address tie situations where two or more sensors submit the same bid value.

Figure 3
Figure 3 illustrates the WSN auction structure. There are multiple sensor devices (i.e., the bidders). The base station (i.e., the auctioneer) collects the bids submitted by the sensor devices. During the winner determination stage, the base station consults a back-end environmental model to determine the priority ordering for the sensors.

Figure 3 :
Figure 3: The proposed WSN auction structure

Figure 5 :
Figure 5: The auction protocol

Figure 7 :
Figure 7: Node-based throughput mean results (a) and network throughput results (b)

Figure 8 :
Figure 8: Node delivery mean results (a) and network delivery results (b)

Figure 10 :
Figure 10: Real world flood auction currency simulation timeline

The auction method's performance is determined at the end of each auction period (grouped node results per period), labelled RealWorldAUCx with x = [1-4] (Figure 11 (a)); the x = 1 results occur once the simulation has run for 4 seconds, with subsequent values of x continuing incrementally to the end of the simulation. The auction period results (RealWorldAUCx) accumulate the entire simulation run time up to each auction update point, and are compared to a run with one node at high priority for the entire simulation (Figure 11 (b)). This also affects the final results in Table 4.

Figure 11 :
Figure 11: Real world flood simulation delivery results (a) and delay results (b)

The comparison simulation has only one node set to higher priority (node 2, the dark green bar) relative to all other nodes for the entire simulation run (not incrementally changing like the real-world auction update test). The results demonstrate how the length (30 seconds) of time spent at high priority has a greater impact on the delivery and delay metric results. The delivery increase = (node 2 - low priority nodes' mean) / low priority nodes' mean = (81 - 73) / 73 = 10.9% improvement compared to the average of the other low priority nodes, and the delay decrease = (0.115 - 0.094) / 0.115 = 18.3% improvement compared to the average of the other low priority nodes. The large percentage change in the delay metric is mainly due to the simulation having only one high priority node. All node metric results are averaged over 10 runs, and the delay metric is averaged over all packets for the entire simulation run.

Figure 12 :
Figure 12: Delivery results for prioritizing only node 2 (a) and delay results for prioritizing only node 2 (b)

Table 2 :
Validating simulation number of runs

Table 3 :
Validating simulation traffic loading

The difference results in Table 4, for the auction updates at the 8, 12, and 16 second auction points, decrease over time.

Table 4 :
Real world flood simulation delivery results

Table 5 :
Real world flood simulation delay results

Table 6 :
Real world flood simulation results with changing priority per node over time