perfsim.traffic#
Module contents#
Submodules#
perfsim.traffic.load_generator module#
- class perfsim.traffic.load_generator.LoadGenerator(name, simulation, notify_observers_on_event=True)[source]#
Bases: Observable
A LoadGenerator is an object responsible not only for generating a given list of traffic objects (traffic_prototypes) on a given simulation.cluster, but also for controlling the state, events, and time of the entire simulation.
- Parameters:
  - name (str) – Name of the LoadGenerator instance.
  - simulation (Simulation) – The Simulation object that the traffic belongs to.
  - notify_observers_on_event (bool)
- threads_dict: Dict[str, ReplicaThread]#
All ReplicaThread instances in the simulation (as a dict)
- arrivals: pd.DataFrame#
Stores request arrival times
- before_traffic_start: str#
The event that is being triggered before traffic starts.
- before_generate_threads: str#
The event that is being triggered before threads are generated.
- before_requests_start: str#
The event that is being triggered before requests start.
- after_requests_start: str#
The event that is being triggered at the end of initiate_next_batch_of_requests.
- before_exec_time_estimation: str#
The event that is being triggered before estimating thread execution times.
- before_executing_threads: str#
The event that is being triggered before threads start running.
- after_completing_load_generation: str#
The event that is being triggered after executing all threads/requests of this load generator.
- after_next_batch_arrival_time_calculation: str#
The event that is being triggered after the next batch arrival time has been calculated.
- before_generate_request_threads: str#
The event that is being triggered before generating a thread for a request (within a subchain).
- after_generate_request_threads: str#
The event that is being triggered after generating a thread for a request (within a subchain).
- after_transmission_estimation: str#
The event that is being triggered after the transmission completion time has been estimated.
- after_estimating_time_of_next_event: str#
The event that is being triggered after the time of the next event has been estimated.
- before_transmit_requests_in_network: str#
The event that is being triggered before transmitting packets
- after_transmit_requests_in_network_and_load_balancing_threads: str#
The event that is being triggered after transmitting requests and load balancing threads on all hosts
- before_request_created: str#
The event that is being triggered before a request is created.
- name: str#
Name of the LoadGenerator instance.
- sim: Simulation#
The Simulation object that the traffic belongs to.
- requests: list[Request]#
All Request instances in the simulation
- threads: list[ReplicaThread]#
All ReplicaThread instances in the simulation
- latencies: pd.DataFrame#
Stores request latencies
- property total_requests_count#
- property merged_arrival_table#
- register_events()[source]#
This is for performance optimization purposes. Instead of generating a new string for each event, we register the event names as attributes and pass each event name by reference instead of copying the string every time. This slightly improves performance, especially because the notify_observers method is called several times per request during the simulation.
- Returns:
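The optimization described above can be sketched generically. The Observable mock below is an assumption for illustration only, not perfsim's actual base class: the point is that each event-name string is built once in register_events and kept as an attribute, so hot paths pass the same string object to notify_observers instead of reconstructing it on every call.

```python
class Observable:
    """Minimal stand-in observable: observers are callables keyed by event name."""

    def __init__(self):
        self._observers = {}

    def attach(self, event_name, callback):
        self._observers.setdefault(event_name, []).append(callback)

    def notify_observers(self, event_name):
        for callback in self._observers.get(event_name, []):
            callback()


class LoadGeneratorSketch(Observable):
    def __init__(self):
        super().__init__()
        self.register_events()

    def register_events(self):
        # Build each event-name string once and keep it as an attribute, so
        # notifications reuse the same object rather than re-creating the string.
        self.before_traffic_start = "before_traffic_start"
        self.after_completing_load_generation = "after_completing_load_generation"

    def execute(self):
        self.notify_observers(self.before_traffic_start)
        self.notify_observers(self.after_completing_load_generation)
```

Observers then subscribe via the attribute, e.g. `gen.attach(gen.before_traffic_start, callback)`, keeping event names consistent between publisher and subscriber.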
- execute_traffic()[source]#
This is the main function responsible for starting the traffic in the cluster.
- Param:
debug: Enable/disable debugging mode
- Return type:
List[ReplicaThread]
- property completed_requests: int#
Return the number of all completed requests.
- get_latencies_grouped_by_sfc()[source]#
Return the list of latencies for all requests of the service chain with the given name.
- Return type:
- plot_latencies(save_dir=None, marker='o', show_numbers=None, moving_average=False, save_values=False, show=True)[source]#
- Parameters:
  - show (bool)
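The moving_average flag presumably smooths the plotted latency series. A minimal sketch of such a smoothing, as a trailing-window mean in plain Python (the window size and the handling of the first few points are assumptions; perfsim's actual smoothing may differ):

```python
def moving_average(values, window=3):
    """Trailing moving average over a latency series.

    For the first points, where fewer than `window` samples exist,
    average over whatever is available so the output has the same length.
    """
    out = []
    for i in range(len(values)):
        lo = max(0, i - window + 1)
        chunk = values[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out
```

For example, `moving_average([2, 4, 6, 8], window=2)` yields `[2.0, 3.0, 5.0, 7.0]`.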
- property last_transmission_id#
- property next_trans_completion_times#
- property requests_ready_for_thread_generation#
- property next_batch_arrival_time#
perfsim.traffic.request module#
- class perfsim.traffic.request.Request(request_id, iteration_id, id_in_iteration, load_generator, traffic_prototype, scm, arrival_time=0)[source]#
Bases: Observable
The Request class represents a real request in a service chain. Note that a “node” here means a tuple of (subchain_id, microservice_endpoint_function).
- Parameters:
  - request_id (str)
  - iteration_id (int)
  - id_in_iteration (int)
  - load_generator (LoadGenerator)
  - traffic_prototype (TrafficPrototype)
  - scm (ServiceChainManager)
- before_init_next_microservices: str#
- after_init_next_microservices: str#
- before_finalizing_subchain: str#
- before_concluding_request: str#
- before_init_transmission: str#
- after_init_transmission: str#
- on_init_transmission: str#
- before_finish_transmission: str#
- after_finish_transmission: str#
- load_generator: LoadGenerator#
The load generator that generated this request
- traffic_prototype: TrafficPrototype#
The traffic prototype from which this request is created
- scm: ServiceChainManager#
- latency: float#
- id: str#
Request ID
- iteration_id: int#
ID of the iteration that this request was generated in
- id_in_iteration: int#
ID of the request within the iteration that this request was generated in
- status: str#
The request status (IN_PROGRESS, COMPLETED, TIMED_OUT)
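The status field takes the three documented states. How perfsim enforces transitions between them is not shown here, but one plausible reading, offered purely as an illustrative sketch, is that IN_PROGRESS may move to either terminal state and terminal states never change:

```python
# The three documented request states. The transition rules below are an
# assumption for illustration; perfsim's actual logic may differ.
IN_PROGRESS, COMPLETED, TIMED_OUT = "IN_PROGRESS", "COMPLETED", "TIMED_OUT"

VALID_TRANSITIONS = {
    IN_PROGRESS: {COMPLETED, TIMED_OUT},
    COMPLETED: set(),   # terminal state
    TIMED_OUT: set(),   # terminal state
}


def transition(current, new):
    """Return the new status if the transition is legal, else raise ValueError."""
    if new not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition {current} -> {new}")
    return new
```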
- register_events()[source]#
This is for performance optimization purposes. Instead of generating a new string for each event, we register the event names as attributes and pass each event name by reference instead of copying the string every time. This slightly improves performance, especially because the notify_observers method is called several times per request during the simulation.
- Returns:
- set_next_nodes_and_replicas(next_nodes)[source]#
- Parameters:
  - next_nodes (List[Tuple[int, MicroserviceEndpointFunction]])
- init_transmission(node_in_alt_graph)[source]#
- Parameters:
  - node_in_alt_graph (Tuple[int, MicroserviceEndpointFunction])
- Return type:
int
- finish_transmission(node_in_alt_graph)[source]#
- Parameters:
  - node_in_alt_graph (Tuple)
- Return type:
None
- init_next_microservices(subchain_id)[source]#
- Parameters:
  - subchain_id (int)
- Return type:
List[Tuple[int, MicroserviceReplica]]
- static get_next_nodes_names(next_nodes)[source]#
- Parameters:
  - next_nodes (List[Tuple[int, MicroserviceEndpointFunction]])
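Given that a “node” is a (subchain_id, endpoint_function) tuple, this static helper plausibly extracts a readable name from each node. A self-contained sketch, with a stand-in class for MicroserviceEndpointFunction and the `name` attribute assumed for illustration:

```python
from dataclasses import dataclass


@dataclass
class EndpointFunction:
    # Stand-in for MicroserviceEndpointFunction; exposing a `name`
    # attribute is an assumption for this sketch.
    name: str


def get_next_nodes_names(next_nodes):
    """Extract the endpoint-function name from each (subchain_id, function) node."""
    return [fn.name for _, fn in next_nodes]
```

For example, a subchain hop through two functions yields their names in order: `get_next_nodes_names([(0, EndpointFunction("auth")), (1, EndpointFunction("db"))])` returns `["auth", "db"]`.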
- property compute_times#
- property trans_times#
- property trans_exact_times#
- property trans_init_times#
- property current_nodes#
- property current_replicas_in_nodes#
- property subchains_status: List[None | str]#
- property next_replicas_in_nodes#
- property next_nodes#
- property trans_deltatimes#
perfsim.traffic.transmission module#
- class perfsim.traffic.transmission.Transmission(id, payload_size, src_replica, dst_replica, subchain_id_request_pair, recalculate_bandwidths_in_links=False)[source]#
Bases: Observable
- Parameters:
  - id (int)
  - payload_size (float)
  - src_replica (MicroserviceReplica)
  - dst_replica (MicroserviceReplica)
  - subchain_id_request_pair (Tuple[Request, int])
  - recalculate_bandwidths_in_links (bool)
- register_events()[source]#
This is for performance optimization purposes. Instead of generating a new string for each event, we register the event names as attributes and pass each event name by reference instead of copying the string every time. This slightly improves performance, especially because the notify_observers method is called several times per request during the simulation.
- Returns:
- static recalc_bw_considering_err(bandwidth, error)[source]#
- Parameters:
  - bandwidth (float)
  - error (float)
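The docs give only the signature, so the formula below is a guess offered purely for illustration: one plausible reading is that the nominal bandwidth is adjusted by a relative error factor. perfsim's real implementation may differ.

```python
def recalc_bw_considering_err(bandwidth, error):
    """Adjust a nominal bandwidth by a relative error factor.

    Assumed semantics (illustrative only): `error` is a signed relative
    deviation, e.g. -0.1 means the effective bandwidth is 10% lower.
    """
    return bandwidth * (1 + error)
```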
- property links: List#
- property source_nic#
- property dest_nic#
- property current_bw#