I’m going to write a small console app that will, at this stage, form an experimental P2P network.
I need a rough idea of how much memory each instance of my fairly small program will take up. Not an exact figure, just a ballpark: 100 KB to 500 KB, or more like 2 MB per process?
Let’s assume I’m using TCP sockets and standard I/O, nothing more than that. It will just make standard TCP connections and log some results to a text file.
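For a sense of scale, a node of that description boils down to very little code in C. Here is a minimal sketch, assuming POSIX sockets on a Unix-like system; the function names are my own for illustration, not from any library:

```c
/* Minimal sketch of one P2P node's plumbing: a TCP listener on
   127.0.0.1 plus append-only text logging. Passing port 0 asks the
   OS for an ephemeral port, handy when spawning many instances. */
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind and listen on 127.0.0.1:port; return the fd, or -1 on error. */
int make_listener(unsigned short port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;

    int on = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &on, sizeof on);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_port = htons(port);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}

/* Report the port the kernel actually assigned (useful with port 0). */
int bound_port(int fd)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof addr;
    if (getsockname(fd, (struct sockaddr *)&addr, &len) < 0) return -1;
    return ntohs(addr.sin_port);
}

/* Append one line to this node's log file. */
int log_line(const char *path, const char *msg)
{
    FILE *f = fopen(path, "a");
    if (!f) return -1;
    fprintf(f, "%s\n", msg);
    fclose(f);
    return 0;
}
```

A program this small is dominated by per-process overhead (stacks, the C runtime, socket buffers) rather than by its own code, which is why I’m asking about typical baseline figures.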
I have a powerful server with 64 GB of RAM, but I would like to run hundreds of instances, perhaps more than 1,000, on this server so I can test randomised transaction distribution between nodes on the P2P network I’m creating. The results of these tests will let me tweak how I set up the seed nodes. I also have some neat ideas I’d like to test which should ensure maximum distribution of the data.
Each process will listen on a different port on the localhost / 127.0.0.1 address.
So I need to run as many of these processes as possible in memory at the same time for my experiment, and they will all interconnect with each other over sockets to simulate a live P2P network. Note: it’s not speed I’m testing but message propagation, the number of hops, and the routes taken through the network.
I need to gather detailed information on how many hops messages take within the network and on the interconnection of ‘supernodes’, which seed the network with randomised IPs for clients to connect to. This ‘supernode’ or ‘seed node’ is written in a different language.
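One cheap way to record hop counts, sketched below: give each message a small wire header with a hop counter that every node increments before forwarding, so each node can log the path length it observed. The struct and field names here are hypothetical, purely to illustrate the idea:

```c
/* Hypothetical wire header for propagation experiments. */
#include <stdint.h>

struct msg_header {
    uint32_t msg_id;   /* random id so nodes can detect duplicates */
    uint16_t hops;     /* incremented at every forward */
    uint16_t src_port; /* originating node's localhost port */
};

/* Returns 1 (and bumps the counter) if the message may still be
   forwarded under the hop limit; 0 means drop it. */
int forward_ok(struct msg_header *h, uint16_t max_hops)
{
    if (h->hops >= max_hops) return 0;
    h->hops++;
    return 1;
}
```

Logging `msg_id`, `hops`, and `src_port` at every node would then give the per-route data the experiment needs.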
So, are there any real-life memory usage stats available for simple, basic I/O with TCP/IP processes?
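Failing published stats, each node could simply log its own resident memory. On Linux (which I assume the server would run) this can be read from /proc; `VmRSS` is the kernel's name for the resident set size in `/proc/self/status`. A sketch:

```c
/* Read this process's resident set size (VmRSS, in KB) from /proc.
   Linux-specific; returns -1 if the file or field is unavailable. */
#include <stdio.h>
#include <string.h>

long rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return -1;

    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {
            sscanf(line + 6, "%ld", &kb); /* e.g. "VmRSS:  1024 kB" */
            break;
        }
    }
    fclose(f);
    return kb;
}
```

Multiplying that figure by the planned instance count would show quickly whether 1,000+ processes fit in 64 GB.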
I haven’t purchased the system yet. I’m debating whether to use C or Xojo for this, but as I haven’t written any C programs for nearly 20 years, I’m leaning towards purchasing and using Xojo: it looks more forgiving of errors and has less of a re-learning curve.