Agent-Based Modelling and Validation Flashcards
(21 cards)
What is agent-based modelling?
a method used to investigate the emergence of population-level patterns and behaviours by simulating the actions of all the individual members of that population
micro-level simulation to derive macro-level system behaviour
What are the elements of agent-based modelling?
agents
* independent components of the simulated system
* entities that have attributes, goals and behaviours
ruleset
* the set of rules that determines how agents interact with each other and their environment
environment
* the environment in which the agents exist and operate
time
* each agent’s state is updated iteratively
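A minimal sketch of how these four elements might fit together, assuming a toy 1D random-walk model (all class and parameter names are illustrative, not from the course material):

```python
import random

class Agent:
    """An independent entity with attributes (state), goals and behaviour."""
    def __init__(self, position):
        self.position = position                      # internal state

    def step(self, environment):
        """Ruleset: take a random step, staying inside the environment."""
        move = random.choice([-1, 0, 1])
        self.position = max(0, min(environment.size - 1, self.position + move))

class Environment:
    """The space in which the agents exist and operate."""
    def __init__(self, size):
        self.size = size

def simulate(n_agents=10, size=20, n_steps=100):
    env = Environment(size)
    agents = [Agent(random.randrange(size)) for _ in range(n_agents)]
    for _ in range(n_steps):                          # time: iterative updates
        for agent in agents:
            agent.step(env)
    return [agent.position for agent in agents]       # macro-level snapshot

print(simulate())
```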
What are the agent’s characteristics?
modularity
* agents are separate from one another and the environment
* agents are self-contained: they have identifiable characteristics, behaviours, and decision-making capabilities
autonomy
* agents are autonomous and self-directed: they independently interact with their environment and with other agents
* an agent’s behaviour links the information it senses from its environment and interactions to its decisions and actions
sociality
* agents are social and interact with other agents by exchanging information and influencing them
conditionality
* an agent has a state (a set of internal variables) that varies over time
What are some examples of environments in agent-based modelling?
agents operate within an environment:
spatial
* Euclidean space: agents roam around in 2D or 3D space
* Geographic Information Systems (GIS): agents move around realistic geospatial landscapes
logical
* e.g. humans operating in a social network
* e.g. sales agents interacting in a market
* e.g. ant colony optimisation
organisational
* e.g. food chain
Describe agent relationships
there may be a large number of agents in the system
interactions between agents are limited by some concept of proximity
* to represent the system being simulated
* or perhaps to limit computational cost
agents can have limited information about each other and may not predict each other’s actions
agent topologies define how agents interact with each other, e.g.:
soup
* agents have no spatial/location attributes
spatial proximity
* in 2D or 3D space
networks: either static or dynamic
* web: agents can directly interact with all other agents
* star: agents interact directly only with coordinator agents
* grid: agents interact directly only with their neighbours
* HCAN (hierarchical collective agent network): a layered system, where agents interact only with agents in higher or lower layers
What is a web topology?
web topology
* agents can directly interact with all other agents
* all agents have the same internal structure, capabilities, operation goals, domain knowledge and possible actions
What is a star topology?
star topology
* agents rely on coordinator agents to send and receive information
* agents in these groups can directly interact only with the members of their group; the coordinators provide connections to other groups
* each group of agents may perform a different task
What is a grid topology?
grid topology
* agents can only interact with other agents in their neighbourhood
* like the star topology, the grid may consist of areas, each with its own dedicated coordinator agent; agents access neighbouring areas through their coordinators
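A minimal sketch of a grid neighbourhood lookup, assuming von Neumann (4-cell) neighbourhoods; function and variable names are illustrative only:

```python
def grid_neighbours(x, y, width, height):
    """Return the cells adjacent to (x, y); agents in those cells are the
    only agents that the agent at (x, y) can interact with directly."""
    candidates = [(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)]
    return [(i, j) for i, j in candidates if 0 <= i < width and 0 <= j < height]

# a corner agent in a 5x5 grid has only two neighbours
print(grid_neighbours(0, 0, 5, 5))   # [(1, 0), (0, 1)]
```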
What is a HCAN (Hierarchical Collective Agent Network) topology?
HCAN (hierarchical collective agent network) topology
* groups of agents make up hierarchically organized layers
* agents within the same layer are not connected to each other but are connected to agents in adjacent layers
What are some examples of the purposes of agent-based modelling?
different reasons for using it
* to understand the agent behaviours
* try different rulesets until the behaviours mimic the real world
* metrics like averages or variance cannot adequately describe behaviours
* to investigate and understand group behaviours
* to test theories about a complex system
* to test and develop strategies that change the group behaviours
* where real-world experiments are not possible
* e.g. testing epidemic control measures
What are the challenges of agent-based modelling?
computational complexity
* particularly with large numbers of agents or complex interactions
calibration and validation
* sourcing/defining the underlying rulesets
* may require extensive data collection
* small changes to parameter values can lead to significantly different outcomes
* emergent behaviour that arises from simple agent rules makes behaviours hard to predict
interpreting results
* understanding how individual behaviours contribute to the overall system dynamics
How long is one simulation?
long enough to complete a task
* one day of patients in a hospital outpatient clinic
* mapping one building using a drone swarm
* one training exercise for an air traffic controller
long enough to identify/learn a pattern of behaviour
* spread of foot and mouth disease
* understanding traffic flow
plus a timeout as a hard upper bound on run length
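A minimal sketch of combining these stopping criteria, assuming a toy stand-in model where the "task" is serving a fixed number of patients; all names are illustrative:

```python
class OutpatientClinicModel:
    """Toy stand-in: the 'task' is serving a fixed number of patients."""
    def __init__(self, patients=50):
        self.remaining = patients

    def step(self):
        self.remaining -= 1                 # serve one patient per step

    def task_complete(self):
        return self.remaining <= 0

def run(model, timeout_steps=10_000):
    """Run until the task completes, but never beyond the timeout."""
    t = 0
    while not model.task_complete() and t < timeout_steps:
        model.step()
        t += 1
    return t

print(run(OutpatientClinicModel()))         # 50 steps, well under the timeout
```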
What is simulation for?
- investigate, test and understand (range of) behaviours of the real system
- try different simulation systems/parameterisations until the behaviours mimic the real world system
- test theories about the real system
- test and develop strategies that could change the real system behaviours
- develop standard operating procedures
- where real-world experiments are not possible
- training
- where you want to control environmental factors, e.g. the weather
- for dangerous or expensive environments, e.g. flying
- optimisation
- training AI models
What is verification and validation?
verification
* are you building it right?
* have I coded my programs correctly?
validation
* are you building the right thing?
* does the simulation accurately represent what I am simulating?
* subjective vs objective
What are the three considerations in validation?
validation
* compare the model and its behaviour with the real system
calibration
* tuning the model and its parameters to better fit the real system
model accuracy
* trade-off between accuracy and effort involved in validation and calibration
* no simulation is 100% accurate
* … but it does need to be “sufficiently” accurate
What is the three step approach to validation and calibration?
- build a model with high “face validity”
- validate the model assumptions
- compare the model performance with the real system
What does face validity mean in validation and calibration?
does the model appear reasonable to experts of the real system?
* experts should be involved in the construction of the conceptual model
* particularly important when it is impossible to collect data
do components of the model behave in reasonable ways?
* in the hospital outpatient clinic coursework, increasing the arrival rate of patients would be expected to increase waiting times
* increasing the probability of disease transmission between two cows would be expected to increase the number of cows catching the disease
high face validity is important:
* high degree of realism as far as the users are concerned
* more likely acceptance of the simulation results by the expert community
* credibility
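One way to check the outpatient-clinic expectation above is a toy single-server queue sketch (parameters are illustrative, not calibrated to any real clinic): raising the arrival rate should raise the mean waiting time.

```python
import random

def mean_wait(arrival_rate, service_rate=1.0, n_patients=5000, seed=0):
    """Single-server queue with exponential inter-arrival and service times."""
    rng = random.Random(seed)
    clock, server_free_at, total_wait = 0.0, 0.0, 0.0
    for _ in range(n_patients):
        clock += rng.expovariate(arrival_rate)        # next patient arrives
        start = max(clock, server_free_at)            # waits if server is busy
        total_wait += start - clock
        server_free_at = start + rng.expovariate(service_rate)
    return total_wait / n_patients

# the busier clinic should report the longer mean wait
print(mean_wait(arrival_rate=0.5), mean_wait(arrival_rate=0.9))
```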
What are the two types and reasons for model assumptions?
two types of assumptions:
structural
* is the way the system operates modelled correctly?
* e.g. do cows perform random walks?
data
* are the data and statistical assumptions correct?
* e.g. is the probability of transmission of foot and mouth between two cows correct?
reasons for assumptions:
simplification
* modelling cows as random walkers rather than as herds with friendship groups
unknown truth
* we do not know how cows really move about
impossible to collect data
* cows are inquisitive and current sensors cause the cows to move unusually
What are the four ways for comparison with the real system?
predict actions of the real system
* compare simulation outcomes with historical performance data (data reserved only for this purpose)
* this is the only objective measure of simulation validity
detailed structured walk-through
* manually analyse every decision point within a simulation run, e.g. follow individual agents in an agent-based model
* the only realistic option if the real system does not yet exist
visualisation
* plausibility tests, e.g. inspection for degenerate cases
* “Turing Test Validation”: experts presented with real and simulated visualisations are asked whether they can discriminate between the two
statistical hypothesis testing
* how likely is it that the simulation outcomes are from the same probability distribution as the real outcomes?
* requires lots of real data
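A sketch of the hypothesis-testing approach using a two-sample Kolmogorov–Smirnov test; the arrays below are synthetic placeholders standing in for reserved real data and simulation output:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
real_waits = rng.exponential(scale=10.0, size=200)       # placeholder: reserved real data
simulated_waits = rng.exponential(scale=10.5, size=200)  # placeholder: simulation output

statistic, p_value = stats.ks_2samp(real_waits, simulated_waits)
if p_value < 0.05:
    print("reject: simulated and real outcomes differ significantly")
else:
    print("no evidence that simulated and real outcomes differ")
```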
Explain validation: data analytics
explore the data collected or generated by the simulation to help discover any anomalies
potential difficulties:
* there are no universal methods for data analytics
* it requires data wrangling skills in addition to simulation skills
Explain validation: docking
compare the outputs of two independently developed simulations
* if the models use the same theory, then their simulations should produce similar outputs
* beneficial for the conceptual models to be developed independently, not just the implementation
three possible positive outcomes from docking:
identity
* the outputs of the two models are indistinguishable
distributional
* the results of the two models are statistically indistinguishable
relational
* the results of the two models show that similar changes in inputs cause similar relational changes in outputs
potential difficulties:
* groupthink: the models are based on the same false theory
* if the models do not agree, which one is wrong?
* perhaps they are both wrong!
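A sketch of the "distributional" docking check: placeholder outputs from two hypothetical, independently developed models are compared with a two-sample test:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
outputs_model_a = rng.normal(loc=100.0, scale=5.0, size=500)   # placeholder outputs
outputs_model_b = rng.normal(loc=100.5, scale=5.0, size=500)   # placeholder outputs

statistic, p_value = stats.mannwhitneyu(outputs_model_a, outputs_model_b)
print("statistically indistinguishable" if p_value >= 0.05 else "models disagree")
```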