perfsim.equipments#
Module contents#
Submodules#
perfsim.equipments.core module#
- class perfsim.equipments.core.Core(cpu, core_id, core_id_in_cpu)[source]#
Bases: Resource, Observable
This class represents a core in a CPU. A core is a processing unit that can execute threads. It has a run queue that contains the threads that are currently assigned to it. The core can execute threads for a certain duration.
- Parameters:
cpu (CPU)
core_id (Union[str, int])
core_id_in_cpu (int)
- id_in_cpu: int#
The core’s unique identifier in the CPU
- runqueue: RunQueue#
The run queue that this core is currently assigned to.
perfsim.equipments.cost_dict module#
- class perfsim.equipments.cost_dict.CostDict[source]#
Bases: TypedDict
The CostDict class is a typed dictionary that contains the costs of the different resources in the simulation.
- cost_start_up: Union[int, float]#
- cost_per_core_per_minute: Union[int, float]#
- cost_per_gb_per_minute: Union[int, float]#
- cost_best_effort_per_minute: Union[int, float]#
- cost_extra_per_minute: Union[int, float]#
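Since CostDict is a TypedDict, a cost specification can be written as a plain dictionary. The sketch below is purely illustrative; the cost values are arbitrary and the currency and time units follow whatever convention the rest of the simulation uses.
>>> from perfsim.equipments.cost_dict import CostDict
>>> cost_dict: CostDict = {
...     "cost_start_up": 0.5,                  # one-off start-up cost (arbitrary value)
...     "cost_per_core_per_minute": 0.002,
...     "cost_per_gb_per_minute": 0.0005,
...     "cost_best_effort_per_minute": 0.001,
...     "cost_extra_per_minute": 0,
... }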
perfsim.equipments.cost_events_dict module#
- class perfsim.equipments.cost_events_dict.CostEventsDict[source]#
Bases: TypedDict
The CostEventsDict class is a typed dictionary that contains the cost events of the different resources in the simulation.
- power_on_periods: List[Tuple[Union[int, float], Union[int, float]]]#
- best_effort_periods: List[Dict[int, Tuple[Union[int, float], Union[int, float]]]]#
- storage_reserved_periods: List[Dict[int, Tuple[Union[int, float], Union[int, float]]]]#
- core_reserved_periods: List[Dict[int, Tuple[Union[int, float], Union[int, float]]]]#
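As with CostDict, a CostEventsDict is an ordinary dictionary at runtime. A minimal, illustrative sketch (the period values are arbitrary and expressed in whatever time unit the simulation uses):
>>> from perfsim.equipments.cost_events_dict import CostEventsDict
>>> cost_events: CostEventsDict = {
...     "power_on_periods": [(0, 600)],        # host powered on between t=0 and t=600
...     "best_effort_periods": [],
...     "storage_reserved_periods": [],
...     "core_reserved_periods": [],
... }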
perfsim.equipments.cpu module#
- class perfsim.equipments.cpu.CPU(name, cores_count, clock_rate, host)[source]#
Bases: Observable
The CPU class represents a single NUMA node in the system. It contains a number of cores and has a clock rate. It is responsible for load balancing among the cores and the threads that are assigned to them.
- Parameters:
name (str)
cores_count (int)
clock_rate (int)
host (Host)
- sched_domain_hierarchy = ['core pairs', 'node']#
The scheduling domain hierarchy, similar to the Linux kernel's sched domains
- cores: List[Core]#
The list of cores in this CPU
- host: Host#
The host that this CPU belongs to
- threads_sorted: SortedDict[int, SortedDict[float, set[ReplicaThread]]]#
Sorted dictionary of all threads (by load) belonging to this CPU (not reliable, only for emergency load balancing)
- pairs_sorted: SortedDict[int, Set[int]]#
Sorted dictionary of all pairs (by load) belonging to this CPU (not reliable, only for load balancing)
- idle_core_pair_ids: Dict[int, SortedSet[int]]#
Stores the IDs of idle cores within each pair (the key is the pair ID in the CPU and the values are the idle core IDs)
- idle_pair_ids: SortedSet[int]#
Stores the IDs of idle pairs (sorted by pair ID)
- idle_core_ids: SortedSet[int]#
Stores the IDs of idle cores (sorted by core ID)
- pairs_load: List[int]#
Stores the load of each pair (sorted by the load)
- register_events()[source]#
This is for performance optimization purposes. Instead of generating a new string for each event, we register the event names as attributes and then pass the event name by reference instead of copying the string every time. This (slightly) improves performance, especially because the notify_observers method is called several times for each request during the simulation.
- Returns:
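A minimal sketch of the idea behind register_events (a hypothetical observable, not PerfSim's actual implementation):
>>> class ExampleObservable:
...     def __init__(self):
...         self.observers = []
...     def register_events(self):
...         # build the event-name string once and keep a reference to it
...         self.before_load_balance = "before_load_balance"
...     def notify_observers(self, event_name):
...         for observer in self.observers:
...             observer(event_name)
...     def load_balance(self):
...         # pass the pre-registered reference instead of rebuilding the string
...         self.notify_observers(self.before_load_balance)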
- get_available()[source]#
Returns available cores in the CPU
- Returns:
Returns available cores in the CPU
- Return type:
int
- property capacity#
Returns the total capacity of the CPU in terms of CPU requests (each core is 1000 millicores).
- Returns:
- is_there_enough_resources_to_reserve(amount)[source]#
Check if there are enough resources to reserve amount of CPU in the CPU.
- Parameters:
amount (int)
- Return type:
bool
- Returns:
- reserve(amount)[source]#
Uniformly reserve a given amount of CPU within all the cores in the CPU.
- Parameters:
amount (int)
- Return type:
None
- release(amount)[source]#
Uniformly release a given amount of CPU within all the cores in the CPU.
- Parameters:
amount (int)
- Return type:
None
- Returns:
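A usage sketch of the reservation API, assuming cpu is the CPU object of an existing Host (amounts are in millicores, per the capacity property above):
>>> amount = 500                                        # 500 millicores
>>> if cpu.is_there_enough_resources_to_reserve(amount=amount):
...     cpu.reserve(amount=amount)                      # spread uniformly over all cores
>>> cpu.release(amount=amount)                          # hand the millicores back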
- get_idle_core_in_sd(sd_name, sd, numa_node_id, current_core_in_sd)[source]#
Get the idle core in the given sched domain
- Parameters:
sd_name (str)
sd (List)
numa_node_id (int)
current_core_in_sd (Core)
- Return type:
int
- Returns:
- get_the_other_core_in_pair(core_id, return_same_if_not_exists=False)[source]#
Get the other core in the pair
- Parameters:
core_id (int)
return_same_if_not_exists (bool)
- Return type:
Optional[int]
- Returns:
- get_busiest_core_in_pair_by_core_id(core_id)[source]#
Get the busiest core in the pair
- Parameters:
core_id
- Return type:
Optional[Core]
- Returns:
- get_busiest_core_in_pair(pair_id)[source]#
Get the busiest core in the pair
- Parameters:
pair_id
- Return type:
Optional[Core]
- Returns:
- get_busiest_core_in_busiest_pair(current_pair_id, numa_node_id=0)[source]#
Get the busiest core in the busiest pair
- Parameters:
current_pair_id
numa_node_id (int)
- Return type:
Optional[Core]
- Returns:
- load_balance_threads_among_runqueues()[source]#
Load balance threads among runqueues
- Return type:
List[List[RunQueue]]
- Returns:
- emergency_load_balance_idle_cores()[source]#
Emergency load balance idle cores
- Return type:
None
- Returns:
Recalculate CPU requests shares
- Return type:
None
- Returns:
- plot(save_dir=None, show=True)[source]#
Plot the CPU requests share and threads count on run queues
- Parameters:
save_dir (str)
show (bool)
- Returns:
- load_balance()[source]#
Load balance the CPU by balancing the threads among the run queues.
- Return type:
None
- add_to_pairs_sorted(pair_id, inverted_pair_load)[source]#
Add a pair to the sorted pairs dictionary.
- Parameters:
pair_id
inverted_pair_load
- Returns:
- add_to_threads_sorted(thread, inverted_thread_load=None)[source]#
Add a thread to the sorted threads dictionary.
- Parameters:
thread (ReplicaThread)
inverted_thread_load (int)
- Returns:
- remove_from_pairs_sorted(pair_id, inverted_pair_load)[source]#
Remove from pairs sorted dictionary
- Parameters:
pair_id
inverted_pair_load
- Returns:
- remove_from_threads_sorted(thread, inverted_thread_load=None)[source]#
Remove from threads sorted dictionary
- Parameters:
thread (ReplicaThread)
inverted_thread_load (int)
- Returns:
- update_idle_pairs(core)[source]#
Update idle pairs in the CPU by checking the load of the cores in the pair
- Parameters:
core
- Returns:
- property clock_rate: int#
Get the clock rate of the CPU in Hertz.
- Returns:
- property clock_rate_in_nanohertz: float#
Get the clock rate of the CPU in nanohertz
- Returns:
perfsim.equipments.equipment module#
perfsim.equipments.host module#
- class perfsim.equipments.host.Host(name, cpu_core_count, cpu_clock_rate, memory_capacity, ram_speed, storage_capacity, storage_speed, network_bandwidth, router=None, cluster=None, sched_latency_ns=6, sched_min_granularity_ns=2, cfs_period_ns=100, cost_dict=None)[source]#
Bases: HostPrototype, Equipment
A Host object simulates a single host in a cluster. It has a single CPU and a single NIC. The number of cores in its CPU can be specified using the cores_count property, and its maximum network bandwidth can be specified with the max_bandwidth property.
- Parameters:
- cost_events: CostEventsDict#
The cost events of this host
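An illustrative construction sketch. The values below mirror the defaults of generate_random_instances further down; the units (Hz for the clock rate, bytes for memory, Bps for bandwidth) are inferred from those defaults rather than guaranteed by this page.
>>> from perfsim.equipments.host import Host
>>> host = Host(name="host-1",
...             cpu_core_count=8,
...             cpu_clock_rate=3_400_000_000,
...             memory_capacity=17_179_869_184,
...             ram_speed=2_675_787_694,
...             storage_capacity=1_000,
...             storage_speed=10_694_999,
...             network_bandwidth=12_500_000)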
- classmethod from_host_prototype(name, host_prototype, cluster=None, router=None)[source]#
Create a host from a host prototype object and assign it to a cluster and a router
- Parameters:
name (str)
host_prototype (HostPrototype)
cluster (Cluster)
router (Router)
- Returns:
- is_replica_placeable_on_host_from_resource_perspective(replica)[source]#
Check if a replica can be placed on this host from a resource perspective (CPU, RAM, BLKIO, NIC)
- Parameters:
replica (MicroserviceReplica)
- Return type:
bool
- Returns:
- place_replica(replica)[source]#
Place a replica on this host and reserve the necessary resources (CPU, RAM, BLKIO, NIC)
- Parameters:
replica (MicroserviceReplica)
- Return type:
None
- Returns:
- evict_replica(replica)[source]#
Evict a replica from this host and release the reserved resources (CPU, RAM, BLKIO, NIC)
- Parameters:
replica (MicroserviceReplica)
- Return type:
None
- Returns:
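A sketch of the replica placement lifecycle, assuming host is a Host and replica is a MicroserviceReplica that already exist in the simulation:
>>> if host.is_replica_placeable_on_host_from_resource_perspective(replica):
...     host.place_replica(replica)     # reserves CPU, RAM, BLKIO and NIC resources
>>> host.evict_replica(replica)         # later: release the reserved resources again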
- is_active()[source]#
Check if the host is active (i.e., has at least one thread running)
- Return type:
bool
- Returns:
- static generate_random_instances(cluster, host_count, core_count=8, cpu_clock_rate=3400000000, memory_capacity=17179869184, ram_speed=2675787694, storage_capacity=1000, storage_speed=10694999.999999998, network_bandwidth=12500000, name_index_starts_from=0)[source]#
Generate random instances of hosts
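For example, assuming cluster is an existing Cluster object, the following illustrative call creates three hosts that only override the core count and otherwise use the defaults shown in the signature:
>>> hosts = Host.generate_random_instances(cluster=cluster, host_count=3, core_count=4)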
- property threads: Set[ReplicaThread]#
Get the threads running on the host (if any)
- Returns:
- static from_config(conf=None, host_prototypes_dict=None)[source]#
Create hosts from a configuration dictionary and a dictionary of host prototypes
- Parameters:
conf (Dict)
host_prototypes_dict (dict[str, HostPrototype])
- Return type:
dict[str, Host]
- Returns:
perfsim.equipments.nic module#
- class perfsim.equipments.nic.Nic(name, bandwidth, equipment)[source]#
Bases: object
This class represents a Network Interface Card (NIC) in a host or a router. A NIC is a hardware component that connects the host or router to the network. It has a bandwidth that determines the maximum amount of data that can be transmitted over the network. The NIC can reserve and release bandwidth for transmissions.
- name: str#
A name for the NIC
- equipment: Union[Host, Router]#
The parent equipment object, e.g., the host or router object that this NIC belongs to
- bandwidth: int#
The NIC's bandwidth (Bps)
- transmissions: Dict[int, Request]#
A dictionary of all active transmissions, accessible by the tuple (subchain_id, request) as keys
- bandwidth_requests_total: int#
The total bandwidth requests on this NIC. Useful for scoring hosts.
- reserve_transmission_for_request(request, subchain_id, src_replica, source_node, destination_replica, destination_node)[source]#
- Parameters:
request (Request)
subchain_id (int)
src_replica (MicroserviceReplica)
source_node (Tuple[int, MicroserviceEndpointFunction])
destination_replica (MicroserviceReplica)
destination_node (Tuple[int, MicroserviceEndpointFunction])
- release_transmission_for_request(request, subchain_id)[source]#
- Parameters:
request (Request)
subchain_id (int)
- reserve_transmission_in_nic(payload_size, src_replica, destination_replica)[source]#
- Parameters:
payload_size (float)
src_replica (MicroserviceReplica)
destination_replica (MicroserviceReplica)
- calculate_transmission_time(payload_size, src_replica, destination_replica)[source]#
This method calculates the transmission time between two replicas. It calculates the transmission time based on the minimum bandwidth between the source and destination replicas, the minimum bandwidth between the source and destination hosts, and the minimum bandwidth between the source and destination NICs. The transmission time is calculated as the time it takes to transmit the payload over the minimum bandwidth.
- Parameters:
payload_size (float)
src_replica (MicroserviceReplica)
destination_replica (MicroserviceReplica)
- Return type:
Union[int, float]
- Returns:
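As a back-of-the-envelope illustration of the bottleneck formula described above (the actual method also accounts for per-replica and per-host bandwidths; the values and the seconds-based time unit are assumptions for this example):
>>> payload_size = 1_000_000                            # bytes
>>> src_nic_bw, dst_nic_bw = 12_500_000, 6_250_000      # Bps
>>> payload_size / min(src_nic_bw, dst_nic_bw)          # time to transmit over the bottleneck
0.16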
- dismiss_bw(bandwidth_request)[source]#
This method is used to dismiss the bandwidth request from the NIC. It is used when the transmission is finished.
- Parameters:
bandwidth_request
- Returns:
perfsim.equipments.ram_set module#
- class perfsim.equipments.ram_set.RamSet(ram_set_id, capacity, speed, host)[source]#
Bases: Resource
This class represents a set of RAM modules in a host. A RAM set is a collection of RAM modules that can be used to store data. It has a capacity and a speed at which it can transfer data.
- Parameters:
ram_set_id (str)
capacity (float)
speed (float)
host (Host)
perfsim.equipments.resource module#
- class perfsim.equipments.resource.Resource(type, name, throttleable, unit_of_measure, capacity)[source]#
Bases: object
The Resource class is the base class for all the resources in a Host.
- Parameters:
type (str)
name (str)
throttleable (bool)
unit_of_measure (str)
capacity (Union[int, float])
- type: str#
The type of the resource.
- name: str#
The name of the resource.
- throttleable: bool#
Whether the resource can get throttled or not.
- unit_of_measure: str#
The unit in which the resource is measured.
- capacity: Union[int, float]#
The maximum capacity of the resource.
- property reserved#
Get the reserved capacity of the resource.
- Returns:
- is_there_enough_resources_to_reserve(amount)[source]#
Check if there are enough resources to reserve.
- Parameters:
amount (int)
- Return type:
bool
- Returns:
- reserve(amount)[source]#
Reserve the given amount of resources.
- Parameters:
amount (int)
- Return type:
None
- Returns:
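A hypothetical reservation sketch using the base Resource class directly (the attribute values are arbitrary; in practice the subclasses such as RamSet and Storage are used):
>>> from perfsim.equipments.resource import Resource
>>> res = Resource(type="memory", name="host-1-ram", throttleable=False,
...                unit_of_measure="bytes", capacity=17_179_869_184)
>>> one_gib = 1_073_741_824
>>> if res.is_there_enough_resources_to_reserve(amount=one_gib):
...     res.reserve(amount=one_gib)
>>> currently_reserved = res.reserved   # the reserved capacity so far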
perfsim.equipments.router module#
- class perfsim.equipments.router.Router(name, latency, egress_ingress_bw, ports_count, cluster)[source]#
Bases: RouterPrototype, Equipment
This class represents a router in a network. A router is a device that forwards data packets between computer networks. It has a latency and a bandwidth that it can support.
- Parameters:
name (str)
latency (int)
egress_ingress_bw (int)
ports_count (int)
cluster (Cluster)
- hosts: dict[Host, int]#
The dictionary of hosts connected to this router. The key is the host object and the value is the router port number that the host is connected to.
- routers: dict[Router, int]#
The dictionary of routers connected to this router. The key is the router object and the value is the router port number that the other router is connected to.
- nics: dict[int, dict[str, Nic]]#
The dictionary of NICs on this router. The key is the port number and the value is a dictionary with the keys "egress" and "ingress" mapping to the corresponding NIC objects.
- connect_router(router, connect_other_pair=True)[source]#
Connect this router to another router.
- Parameters:
router (Router)
connect_other_pair (bool)
- Returns:
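A hypothetical wiring sketch, assuming cluster is an existing Cluster (the latency and bandwidth values are arbitrary):
>>> from perfsim.equipments.router import Router
>>> r1 = Router(name="r1", latency=100, egress_ingress_bw=12_500_000, ports_count=4, cluster=cluster)
>>> r2 = Router(name="r2", latency=100, egress_ingress_bw=12_500_000, ports_count=4, cluster=cluster)
>>> r1.connect_router(r2)   # connect_other_pair=True also connects r2 back to r1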
- disconnect_router(router, suppress_error=False, disconnect_other_pair=True)[source]#
Disconnect this router from another router.
- Parameters:
router (Router)
suppress_error (bool)
disconnect_other_pair (bool)
- Returns:
- disconnect_host(host, suppress_error=False)[source]#
Disconnect this router from a host.
- Parameters:
host (Host)
suppress_error (bool)
- Returns:
- classmethod from_router_prototype(name, router_prototype, cluster=None)[source]#
Create a router from a router prototype.
- Parameters:
name (str)
router_prototype (RouterPrototype)
cluster (Cluster)
- Returns:
- static from_config(conf=None, router_prototypes_dict=None)[source]#
Create routers from a configuration.
- Parameters:
conf (Dict)
router_prototypes_dict (Dict[str, RouterPrototype])
- Return type:
dict[str, Router]
- Returns:
perfsim.equipments.run_queue module#
- class perfsim.equipments.run_queue.RunQueue(core)[source]#
Bases: object
RunQueue is a queue of ReplicaThreads.
- Parameters:
core (Core)
- rq: list[ReplicaThread]#
List of ReplicaThreads
- lightest_threads_in_rq: SortedDict[int, SortedDict[float, set[ReplicaThread]]]#
- active_threads: set[ReplicaThread]#
The set of active ReplicaThreads
- best_effort_active_threads: ThreadSet[ReplicaThread]#
All active best-effort threads
- guaranteed_active_threads: ThreadSet[ReplicaThread]#
All active guaranteed threads
- burstable_active_threads: ThreadSet[ReplicaThread]#
All active burstable threads
- burstable_unlimited_active_threads: ThreadSet[ReplicaThread]#
All active burstable threads that do not have a limit
- burstable_limited_active_threads: ThreadSet[ReplicaThread]#
All active burstable threads that have a limit
- requeue_task(thread)[source]#
Requeue a thread in the run queue.
- Parameters:
thread (ReplicaThread)
- Return type:
None
- Returns:
Recalculate the CPU requests shares.
- Return type:
None
- Returns:
- run_idle(duration)[source]#
Run the idle threads for the given duration.
- Parameters:
duration (int)
- Return type:
None
- Returns:
Assign the CPU requests share to the thread.
- Parameters:
thread (ReplicaThread)
cpu_requests (float)
- Return type:
None
- Returns:
- categorize_thread_into_sets(thread)[source]#
Categorize the thread into the sets, i.e., best effort, guaranteed, burstable, burstable unlimited, and burstable limited.
- Parameters:
thread (ReplicaThread)
- Return type:
None
- Returns:
- decategorize_thread_from_sets(thread)[source]#
Decategorize the thread from the sets, i.e., best effort, guaranteed, burstable, burstable unlimited, and burstable limited.
- Parameters:
thread (ReplicaThread)
- Return type:
None
- Returns:
- enqueue_task(thread, load_balance=False)[source]#
Enqueue a thread in the run queue.
- Parameters:
thread (ReplicaThread)
load_balance (bool)
- Return type:
None
- Returns:
- enqueue_tasks(threads, load_balance=False)[source]#
Enqueue a list of threads in the run queue at once.
- Parameters:
threads (List[ReplicaThread])
load_balance (bool)
- Return type:
None
- Returns:
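A usage sketch, assuming core is a Core and thread is a ReplicaThread that already exist in the simulation (every Core owns its RunQueue via the runqueue attribute):
>>> rq = core.runqueue
>>> rq.enqueue_task(thread)               # put the thread on this core's run queue
>>> rq.dequeue_task_by_thread(thread)     # take it off again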
- remove_from_lightest_threads_in_rq(thread)[source]#
Remove the thread from the lightest threads in the run queue.
- Parameters:
thread
- Returns:
- add_to_lightest_threads_in_rq(thread)[source]#
Add the thread to the lightest threads in the run queue.
- Parameters:
thread (ReplicaThread)
- Returns:
- dequeue_task_by_thread(thread, load_balance=False)[source]#
Dequeue a thread from the run queue by the thread.
- Parameters:
thread (ReplicaThread)
load_balance (bool)
- Return type:
None
- Returns:
- dequeue_task_by_thread_index(thread, load_balance=False)[source]#
Dequeue a thread from the run queue by the thread index in the run queue.
- Parameters:
thread (int)
load_balance (bool)
- Return type:
- Returns:
- property load: int#
Get the load of the run queue.
- Returns:
perfsim.equipments.storage module#
- class perfsim.equipments.storage.Storage(storage_id, capacity, speed, host)[source]#
Bases: Resource
This class represents a storage device in a host. A storage device is a device that stores data. It has a capacity and a speed at which it can transfer data.
- Parameters:
storage_id (str)
capacity (float)
speed (float)
host (Host)
perfsim.equipments.topology module#
- class perfsim.equipments.topology.Topology(name, simulation, egress_err, ingress_err, incoming_graph_data=None, hosts=None, routers=None, links=None, copy=False, **attr)[source]#
Bases: TopologyPrototype, Observable
Initialize a graph with edges, name, or graph attributes.
Parameters#
- incoming_graph_data : input graph
Data to initialize graph. If incoming_graph_data=None (default) an empty graph is created. The data can be an edge list, or any NetworkX graph object. If the corresponding optional Python packages are installed the data can also be a 2D NumPy array, a SciPy sparse array, or a PyGraphviz graph.
- multigraph_input : bool or None (default None)
Note: Only used when incoming_graph_data is a dict. If True, incoming_graph_data is assumed to be a dict-of-dict-of-dict-of-dict structure keyed by node to neighbor to edge keys to edge data for multi-edges. A NetworkXError is raised if this is not the case. If False, to_networkx_graph() is used to try to determine the dict's graph data structure as either a dict-of-dict-of-dict keyed by node to neighbor to edge data, or a dict-of-iterable keyed by node to neighbors. If None, the treatment for True is tried, but if it fails, the treatment for False is tried.
- attr : keyword arguments, optional (default = no attributes)
Attributes to add to graph as key=value pairs.
See Also#
convert
Examples#
>>> G = nx.Graph()  # or DiGraph, MultiGraph, MultiDiGraph, etc
>>> G = nx.Graph(name="my graph")
>>> e = [(1, 2), (2, 3), (3, 4)]  # list of edges
>>> G = nx.Graph(e)
Arbitrary graph attribute pairs (key=value) may be assigned
>>> G = nx.Graph(e, day="Friday")
>>> G.graph
{'day': 'Friday'}
- param name:
- type name:
str
- param simulation:
- type simulation:
- param egress_err:
- type egress_err:
float
- param ingress_err:
- type ingress_err:
float
- param hosts:
- type hosts:
Dict[str, Host]
- param routers:
- type routers:
Dict[str, Router]
- param links:
- type links:
Dict[str, TopologyLink]
- param copy:
- type copy:
bool
- before_recalculate_transmissions_bw_on_all_links: str#
The registered event name that is fired before recalculating the transmission bandwidths on all links.
- recalculate_transmissions_bw_on_all_links()[source]#
Recalculate the bandwidth of all the transmissions on all the links in the topology.
- Returns:
- recalculate_transmissions_portion_of_bandwidth_on_link(link)[source]#
Recalculate the bandwidth portion of all the transmissions on the given link.
- Parameters:
link
- Returns:
- classmethod from_prototype(simulation, prototype, copy=False)[source]#
Create a new topology from a prototype.
- Parameters:
simulation (Simulation)
prototype (TopologyPrototype)
copy (bool)
- Returns:
- Parameters:
name (str)
simulation (Simulation)
egress_err (float)
ingress_err (float)
hosts (Dict[str, Host])
routers (Dict[str, Router])
links (Dict[str, TopologyLink])
copy (bool)
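A construction sketch, assuming sim is a Simulation and proto is a TopologyPrototype that have already been set up:
>>> from perfsim.equipments.topology import Topology
>>> topo = Topology.from_prototype(simulation=sim, prototype=proto, copy=False)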
perfsim.equipments.topology_link module#
- class perfsim.equipments.topology_link.TopologyLink(name, latency, source, destination)[source]#
Bases: TopologyLinkPrototype
This class represents a link between two nodes in a network topology. It has a latency and a bandwidth.
- classmethod from_prototype(name, prototype, src, dest)[source]#
Create a new TopologyLink object from a TopologyLinkPrototype object.
- Parameters:
name (str)
prototype (TopologyLinkPrototype)
- Returns:
- static to_dict(links_list)[source]#
Convert a list of TopologyLink objects to a dictionary where the key is the name of the link and the value is the link object.
- Parameters:
links_list (list[TopologyLink])
- Return type:
dict[str, TopologyLink]
- Returns:
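An illustrative sketch, assuming r1 and host are an already-created Router and Host (whether a link may span a router and a host, and the latency unit, are assumptions here):
>>> from perfsim.equipments.topology_link import TopologyLink
>>> link = TopologyLink(name="r1-to-host-1", latency=0, source=r1, destination=host)
>>> links = TopologyLink.to_dict([link])   # {'r1-to-host-1': <TopologyLink ...>}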