perfsim.prototypes#

Module contents#

Submodules#

perfsim.prototypes.affinity_prototype module#

class perfsim.prototypes.affinity_prototype.AffinityPrototype(name, affinity_microservices, antiaffinity_microservices, affinity_hosts, antiaffinity_hosts)[source]#

Bases: object

Parameters:
  • name (str)

  • affinity_microservices (List[str])

  • antiaffinity_microservices (List[str])

  • affinity_hosts (List[str])

  • antiaffinity_hosts (List[str])

static copy_to_dict(affinity_prototypes)[source]#
Parameters:

affinity_prototypes (Union[List[AffinityPrototype], Dict[str, AffinityPrototype]])

Return type:

dict[str, AffinityPrototype]

static from_config(conf)[source]#
Parameters:

conf (dict)

Return type:

Dict[str, AffinityPrototype]
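
A minimal construction sketch (the names and values below are illustrative placeholders, not taken from PerfSim's own examples):

>>> from perfsim.prototypes.affinity_prototype import AffinityPrototype
>>> ap = AffinityPrototype(
...     name="checkout_affinity",                  # illustrative name
...     affinity_microservices=["frontend", "cart"],
...     antiaffinity_microservices=["analytics"],
...     affinity_hosts=["host1"],
...     antiaffinity_hosts=["host2"],
... )
>>> prototypes = AffinityPrototype.copy_to_dict([ap])  # dict[str, AffinityPrototype]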

perfsim.prototypes.base_prototype module#

class perfsim.prototypes.base_prototype.BasePrototype[source]#

Bases: ABC

static get_prototypes(subject, key, attribute, conf, existing_sm)[source]#

perfsim.prototypes.cluster_prototype module#

class perfsim.prototypes.cluster_prototype.ClusterPrototype(scenario_name, service_chains, topology, placement_algorithm, resource_allocation_scenarios, affinity_prototypes, simulation_scenario)[source]#

Bases: object

placement_scenario: PlacementAlgorithm#
traffic_prototypes_dict: Dict[str, TrafficPrototype]#
scms_dict: Dict[str, ServiceChainManager]#
scenario_name: str#
service_chains_dict: Dict[str, ServiceChain]#
topology: Topology#
resource_allocation_scenarios_dict: Dict[str, ResourceAllocationScenario]#
affinity_prototypes_dict: Dict[str, AffinityPrototype]#
simulation_scenario: SimulationScenario#

perfsim.prototypes.host_prototype module#

A Host object simulates a single host in a cluster. It has a single CPU and a single NIC. The number of cores in its CPU can be specified using the cores_count property, and its maximum network bandwidth can be specified with the max_bandwidth property.

class perfsim.prototypes.host_prototype.HostPrototype(name, cpu_core_count, cpu_clock_rate, memory_capacity, ram_speed, storage_capacity, storage_speed, network_bandwidth, sched_latency_ns=6, sched_min_granularity_ns=2, cfs_period_ns=100, cost_dict=None)[source]#

Bases: object

A Host may contain several Microservices. Here are the possible initialization parameters:

name

Name of the host. For example, host1.

cores_count

Number of cores that this host’s CPU contains. A CPU with the given number of cores will be created (accessible via self.Cpu). Currently, it is assumed that each Host has only one CPU.

cpu_clock_rate

The maximum clock rate of the CPU for this host (in Hertz).

max_bandwidth

Maximum bandwidth that this host’s NIC can support. During initialization, a Nic object with the given bandwidth is created (accessible via self.Nic).

Parameters:
  • name (str)

  • cpu_core_count (int)

  • cpu_clock_rate (int)

  • memory_capacity (int)

  • ram_speed (int)

  • storage_capacity (int)

  • storage_speed (int)

  • network_bandwidth (int)

  • sched_latency_ns (int)

  • sched_min_granularity_ns (int)

  • cfs_period_ns (int)

  • cost_dict (CostDict)

cost_dict: CostDict#

The cost of running this host per minute

static from_config(conf=None)[source]#
Parameters:

conf (Dict)

Return type:

dict[str, HostPrototype]
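
A hedged construction sketch; only cpu_clock_rate is documented above to be in Hertz, so the units of the remaining numeric values are assumptions and the numbers themselves are illustrative:

>>> from perfsim.prototypes.host_prototype import HostPrototype
>>> host_proto = HostPrototype(
...     name="host_type_a",
...     cpu_core_count=8,
...     cpu_clock_rate=3_000_000_000,      # 3 GHz, in Hertz (documented unit)
...     memory_capacity=17_179_869_184,    # assumed bytes (16 GiB)
...     ram_speed=2_400_000_000,           # illustrative; unit not documented here
...     storage_capacity=512_000_000_000,  # illustrative; unit not documented here
...     storage_speed=500_000_000,         # illustrative; unit not documented here
...     network_bandwidth=10_000_000_000,  # illustrative; unit not documented here
... )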

perfsim.prototypes.microservice_endpoint_function_prototype module#

class perfsim.prototypes.microservice_endpoint_function_prototype.MicroserviceEndpointFunctionPrototype(name, id, threads_instructions, threads_avg_cpi, threads_avg_cpu_usages, threads_avg_mem_accesses, threads_single_core_isolated_cache_misses, threads_single_core_isolated_cache_refs, threads_avg_cache_miss_penalty, threads_avg_blkio_rw, request_timeout, microservice_prototype=None)[source]#

Bases: object

Parameters:
  • name (str)

  • id (int)

  • threads_instructions (List[int])

  • threads_avg_cpi (List[float])

  • threads_avg_cpu_usages (List[float])

  • threads_avg_mem_accesses (List[int])

  • threads_single_core_isolated_cache_misses (List[int])

  • threads_single_core_isolated_cache_refs (List[int])

  • threads_avg_cache_miss_penalty (List[float])

  • threads_avg_blkio_rw (List[int])

  • request_timeout (float)

  • microservice_prototype (MicroservicePrototype)

add_threads(threads_instructions, threads_avg_cpi, threads_avg_cpu_usage, threads_avg_mem_accesses, threads_single_core_isolated_cache_misses, threads_single_core_isolated_cache_refs, threads_avg_cache_miss_penalty, threads_avg_blkio_rw)[source]#
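
A construction sketch, assuming each of the threads_* lists holds one value per thread of the endpoint function (here, a single thread); all numbers are illustrative:

>>> from perfsim.prototypes.microservice_endpoint_function_prototype import (
...     MicroserviceEndpointFunctionPrototype,
... )
>>> ep_proto = MicroserviceEndpointFunctionPrototype(
...     name="get_items",
...     id=0,
...     threads_instructions=[1_000_000],
...     threads_avg_cpi=[1.2],
...     threads_avg_cpu_usages=[0.8],
...     threads_avg_mem_accesses=[100_000],
...     threads_single_core_isolated_cache_misses=[5_000],
...     threads_single_core_isolated_cache_refs=[80_000],
...     threads_avg_cache_miss_penalty=[12.0],
...     threads_avg_blkio_rw=[0],
...     request_timeout=5.0,
... )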

perfsim.prototypes.microservice_endpoint_function_prototype_dtype module#

class perfsim.prototypes.microservice_endpoint_function_prototype_dtype.MicroserviceEndpointFunctionPrototypeDtype[source]#

Bases: TypedDict

endpoint_function_prototype_name: str#
endpoint_function_prototype: MicroserviceEndpointFunctionPrototype#

perfsim.prototypes.microservice_prototype module#

class perfsim.prototypes.microservice_prototype.MicroservicePrototype(name, endpoint_function_prototypes=None)[source]#

Bases: object

property endpoint_function_prototypes_dict#
add_endpoint_function_prototype(prototype)[source]#
Parameters:

prototype (MicroserviceEndpointFunctionPrototype)

remove_endpoint_function_prototype(prototype)[source]#
Parameters:

prototype (MicroserviceEndpointFunctionPrototype)

static from_config(conf=None)[source]#
Parameters:

conf (Dict)

Return type:

dict[str, MicroservicePrototype]
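
A usage sketch that attaches an endpoint function prototype to a microservice prototype; ep_proto is assumed to be the MicroserviceEndpointFunctionPrototype built in the sketch of the previous module:

>>> from perfsim.prototypes.microservice_prototype import MicroservicePrototype
>>> ms_proto = MicroservicePrototype(name="cart")
>>> ms_proto.add_endpoint_function_prototype(ep_proto)  # ep_proto from the previous sketch
>>> ms_proto.endpoint_function_prototypes_dict          # assumed to map prototype names to prototypes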

perfsim.prototypes.router_prototype module#

class perfsim.prototypes.router_prototype.RouterPrototype(name, latency, egress_ingress_bw, ports_count)[source]#

Bases: object

Parameters:
  • name (str)

  • latency (int)

  • egress_ingress_bw (int)

  • ports_count (int)

name: str#
latency: int#
egress_ingress_original_bw: int#
ports_count: int#
static from_config(conf=None)[source]#
Parameters:

conf (Dict)

Return type:

dict[str, RouterPrototype]
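
A construction sketch; the units of latency and egress_ingress_bw are not stated above, so the values below are illustrative assumptions:

>>> from perfsim.prototypes.router_prototype import RouterPrototype
>>> router_proto = RouterPrototype(
...     name="spine_router",
...     latency=50_000,                    # illustrative; unit not documented here
...     egress_ingress_bw=40_000_000_000,  # illustrative; unit not documented here
...     ports_count=16,
... )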

perfsim.prototypes.topology_prototype module#

class perfsim.prototypes.topology_prototype.TopologyPrototype(name, egress_err, ingress_err, incoming_graph_data=None, hosts=None, routers=None, links=None, **attr)[source]#

Bases: MultiDiGraph

Initialize a graph with edges, name, or graph attributes.

Parameters#

incoming_graph_data : input graph

Data to initialize graph. If incoming_graph_data=None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a 2D NumPy array, a SciPy sparse array, or a PyGraphviz graph.

multigraph_input : bool or None (default None)

Note: Only used when incoming_graph_data is a dict. If True, incoming_graph_data is assumed to be a dict-of-dict-of-dict-of-dict structure keyed by node to neighbor to edge keys to edge data for multi-edges. A NetworkXError is raised if this is not the case. If False, to_networkx_graph() is used to try to determine the dict’s graph data structure as either a dict-of-dict-of-dict keyed by node to neighbor to edge data, or a dict-of-iterable keyed by node to neighbors. If None, the treatment for True is tried, but if it fails, the treatment for False is tried.

attr : keyword arguments, optional (default= no attributes)

Attributes to add to graph as key=value pairs.

See Also#

convert

Examples#

>>> G = nx.Graph()  # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G = nx.Graph(name="my graph")
>>> e = [(1, 2), (2, 3), (3, 4)]  # list of edges
>>> G = nx.Graph(e)

Arbitrary graph attribute pairs (key=value) may be assigned

>>> G = nx.Graph(e, day="Friday")
>>> G.graph
{'day': 'Friday'}

Parameters:
  • name (str)

  • egress_err (float)

  • ingress_err (float)

  • hosts (Dict[str, Host])

  • routers (Dict[str, Router])

  • links (Dict[str, TopologyLink])

egress_err: float#

In Kubernetes, we noticed a slight error between desired and actual egress bandwidths. For example, if we set the egress_bandwidth of a pod to 100Mbps, it gets a slightly lower bandwidth (~95Mbps, i.e., an error of 0.05 = 5%). We call this slight error egress_err. Use 0.05 if you want to indicate an error of 5%.

ingress_err: float#

In Kubernetes, we noticed a slight error between desired and actual ingress bandwidths. For example, if we set the ingress_bandwidth of a pod to 100Mbps, it gets a slightly lower bandwidth (~95Mbps, i.e., an error of 0.05 = 5%). We call this slight error ingress_err. Use 0.05 if you want to indicate an error of 5%.

hosts_dict: Dict[str, Host]#
routers_dict: Dict[str, Router]#

active_transmissions: Set[Transmission]#
add_equipments(hosts, routers, **attr)[source]#
Parameters:
  • hosts (Dict[str, Host])

  • routers (Dict[str, Router])

add_edges_from(ebunch_to_add, **attr)[source]#

Add all the edges in ebunch_to_add.

Parameters#

ebunch_to_add : container of edges

Each edge given in the container will be added to the graph. The edges can be:

  • 2-tuples (u, v) or

  • 3-tuples (u, v, d) for an edge data dict d, or

  • 3-tuples (u, v, k) for not iterable key k, or

  • 4-tuples (u, v, k, d) for an edge with data and key k

attr : keyword arguments, optional

Edge data (or labels or objects) can be assigned using keyword arguments.

Returns#

A list of edge keys assigned to the edges in ebunch.

See Also#

add_edge : add a single edge add_weighted_edges_from : convenient way to add weighted edges

Notes#

Adding the same edge twice has no effect but any edge data will be updated when each duplicate edge is added.

Edge attributes specified in an ebunch take precedence over attributes specified via keyword arguments.

Default keys are generated using the method new_edge_key(). This method can be overridden by subclassing the base class and providing a custom new_edge_key() method.

When adding edges from an iterator over the graph you are changing, a RuntimeError can be raised with message: RuntimeError: dictionary changed size during iteration. This happens when the graph’s underlying dictionary is modified during iteration. To avoid this error, evaluate the iterator into a separate object, e.g. by using list(iterator_of_edges), and pass this object to G.add_edges_from.

Examples#

>>> G = nx.Graph()  # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G.add_edges_from([(0, 1), (1, 2)])  # using a list of edge tuples
>>> e = zip(range(0, 3), range(1, 4))
>>> G.add_edges_from(e)  # Add the path graph 0-1-2-3

Associate data to edges

>>> G.add_edges_from([(1, 2), (2, 3)], weight=3)
>>> G.add_edges_from([(3, 4), (1, 4)], label="WN2898")

Evaluate an iterator over a graph if using it to modify the same graph

>>> G = nx.MultiGraph([(1, 2), (2, 3), (3, 4)])
>>> # Grow graph by one new node, adding edges to all existing nodes.
>>> # wrong way - will raise RuntimeError
>>> # G.add_edges_from(((5, n) for n in G.nodes))
>>> # right way - note that there will be no self-edge for node 5
>>> assigned_keys = G.add_edges_from(list((5, n) for n in G.nodes))

Parameters:

ebunch_to_add (dict[str, TopologyLink])

add_edge(u_for_edge, v_for_edge, key=None, **attr)[source]#

Add an edge between u and v.

The nodes u and v will be automatically added if they are not already in the graph.

Edge attributes can be specified with keywords or by directly accessing the edge’s attribute dictionary. See examples below.

Parameters#

u_for_edge, v_for_edge : nodes

Nodes can be, for example, strings or numbers. Nodes must be hashable (and not None) Python objects.

key : hashable identifier, optional (default=lowest unused integer)

Used to distinguish multiedges between a pair of nodes.

attr : keyword arguments, optional

Edge data (or labels or objects) can be assigned using keyword arguments.

Returns#

The edge key assigned to the edge.

See Also#

add_edges_from : add a collection of edges

Notes#

To replace/update edge data, use the optional key argument to identify a unique edge. Otherwise a new edge will be created.

NetworkX algorithms designed for weighted graphs cannot use multigraphs directly because it is not clear how to handle multiedge weights. Convert to Graph using edge attribute ‘weight’ to enable weighted graph algorithms.

Default keys are generated using the method new_edge_key(). This method can be overridden by subclassing the base class and providing a custom new_edge_key() method.

Examples#

The following all add the edge e=(1, 2) to graph G:

>>> G = nx.MultiDiGraph()
>>> e = (1, 2)
>>> key = G.add_edge(1, 2)  # explicit two-node form
>>> G.add_edge(*e)  # single edge as tuple of two nodes
1
>>> G.add_edges_from([(1, 2)])  # add edges from iterable container
[2]

Associate data to edges using keywords:

>>> key = G.add_edge(1, 2, weight=3)
>>> key = G.add_edge(1, 2, key=0, weight=4)  # update data for key=0
>>> key = G.add_edge(1, 3, weight=7, capacity=15, length=342.7)

For non-string attribute keys, use subscript notation.

>>> ekey = G.add_edge(1, 2)
>>> G[1][2][0].update({0: 5})
>>> G.edges[1, 2, 0].update({0: 5})
reinitiate_topology()[source]#
draw(show_microservices=True, save_dir=None, show=False, type='html')[source]#
Parameters:
  • show_microservices (bool)

  • save_dir (str)

  • show (bool)

  • type (str)

static recalculate_transmissions_times(transmissions)[source]#
Parameters:

transmissions (Set[Transmission])

static copy_to_dict(topology_prototypes)[source]#
Parameters:

topology_prototypes (Union[List[TopologyPrototype], Dict[str, TopologyPrototype]])

Return type:

Dict[str, TopologyPrototype]

static from_config(conf, topology_equipments_dict, link_prototypes_dict)[source]#
Parameters:

conf (dict)

Return type:

Dict[str, TopologyPrototype]
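
A minimal construction sketch using only the required parameters; hosts_dict and routers_dict are placeholder dictionaries of Host and Router objects assumed to have been prepared elsewhere:

>>> from perfsim.prototypes.topology_prototype import TopologyPrototype
>>> topo_proto = TopologyPrototype(name="small_dc", egress_err=0.05, ingress_err=0.05)
>>> topo_proto.add_equipments(hosts=hosts_dict, routers=routers_dict)  # placeholder Dict[str, Host] / Dict[str, Router]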


perfsim.prototypes.traffic_prototype module#

class perfsim.prototypes.traffic_prototype.TrafficPrototype(name, arrival_interval_ns=1, duration=1, parallel_user=1, start_at=0)[source]#

Bases: object

TrafficPrototype is a class that holds the traffic configuration, which can later be used to generate traffic for various service chains using a load generator.

Parameters:
  • name (str)

  • arrival_interval_ns (int)

  • duration (int)

  • parallel_user (int)

  • start_at (int)

property arrival_table#
property start_at#
property arrival_interval_ns#
property duration#
property parallel_user#
property iterations_count#

Total number of batch request arrivals

property requests_count#
recalc_iterations_count()[source]#
Return type:

int

recalc_requests_count()[source]#
Return type:

int

recalc_arrival_table()[source]#
recalc_all_properties()[source]#
static copy_to_dict(traffic_prototypes)[source]#
Parameters:

traffic_prototypes (Union[List[TrafficPrototype], Dict[str, TrafficPrototype]])

Return type:

Dict[str, TrafficPrototype]

static from_config(conf=None)[source]#
Parameters:

conf (Dict)

Return type:

dict[str, TrafficPrototype]
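
A construction sketch; arrival_interval_ns is assumed to be in nanoseconds (per its name), while the unit of duration is not stated above, so the values are illustrative. The explicit recalc_all_properties() call is defensive; the constructor may already compute the derived properties:

>>> from perfsim.prototypes.traffic_prototype import TrafficPrototype
>>> tp = TrafficPrototype(
...     name="steady_load",
...     arrival_interval_ns=1_000_000_000,  # one batch arrival per second (assumed nanoseconds)
...     duration=60,                        # illustrative; unit not documented here
...     parallel_user=10,
...     start_at=0,
... )
>>> tp.recalc_all_properties()
>>> tp.iterations_count, tp.requests_count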