Console app memory usage on Linux

I’m going to write a small console app which will at this stage form an experimental P2P network.

I need to know how much memory each instance of my pretty small program will take up.

Not exactly how much memory it will use, just a rough figure: 100 KB – 500 KB, or more like 2 MB per process?

Let’s assume I’m using TCP sockets and standard I/O, nothing more than that. It will just make standard TCP connections and log some results into a text file.
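For reference, here is a minimal C sketch of what each instance would do, assuming plain POSIX sockets; the log path, timestamp format and peer address are just placeholders, not a final design:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <time.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Append one timestamped line to the node's log file. */
static int log_line(const char *path, const char *msg)
{
    FILE *f = fopen(path, "a");
    if (!f) return -1;
    fprintf(f, "%ld %s\n", (long)time(NULL), msg);
    fclose(f);
    return 0;
}

/* Open a TCP connection to a peer; returns the fd, or -1 on failure. */
static int connect_peer(const char *ip, int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons((uint16_t)port);
    inet_pton(AF_INET, ip, &addr.sin_addr);
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

A program of this shape, statically nothing and dynamically linked only against libc, is about as small as a TCP node gets on Linux.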

I have a powerful server with 64 GB of RAM, but I would like to run hundreds of instances - perhaps more than 1000 - on this server, so I can test randomised transaction distribution between nodes on the P2P network I’m creating. The results of these tests will let me tweak the way I set up the seed nodes. I also have some neat ideas I’d like to test which ensure maximum distribution of the data.

Each process will be listening on a different port on the localhost / 127.0.0.1 IP address.
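In C terms, binding each instance to its own loopback port could be sketched like this (assuming POSIX sockets; the port would come from the command line, and port 0 asks the kernel for any free port):

```c
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <sys/socket.h>

/* Bind and listen on 127.0.0.1:port; returns the listening fd, or -1. */
static int listen_on(int port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) return -1;
    int yes = 1;
    setsockopt(fd, SOL_SOCKET, SO_REUSEADDR, &yes, sizeof yes);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons((uint16_t)port);
    inet_pton(AF_INET, "127.0.0.1", &addr.sin_addr);
    if (bind(fd, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(fd, 16) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Launching 1000 instances is then just `./node 5001`, `./node 5002`, and so on, each calling something like `listen_on(atoi(argv[1]))`.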

So I need to run a lot of these processes (as many as possible, really) in memory at the same time for my experiment, and they will all interconnect with each other using sockets to simulate a live P2P network. Note - it’s not the speed I’m testing but the message propagation, the number of hops and the routes taken throughout the network.

I need to gather detailed information on how many hops it takes within the network, and on the interconnection of ‘supernodes’ which seed the network with randomised IPs for clients to connect to. This ‘supernode’ or ‘seed node’ is written in a different language.

So are there any real-life memory usage stats available for simple processes doing basic I/O over TCP/IP?

I haven’t purchased the system yet. I’m debating whether to use C or Xojo for this, but as I haven’t written any C programs for nearly 20 years, I’m leaning towards purchasing and using Xojo, as it looks more forgiving of errors and less of a re-learning curve.

Thanks!

I think it’s more like at least 3 MB, and you have to realize it will not be a single-file executable.

This is what I was wondering about: I have no idea what kind of libraries etc. will be loaded alongside the executable, or how that works. For example, if it loads a library once, will it be shared by all processes, or will it, as I suspect, load the same library again for every instance?

Even at 3 MB per process on this 64 GB RAM system, I will be able to run a lot more processes than I need - the server itself is doing pretty much nothing at the moment, so I have it to myself for this project.

I just want to confirm it doesn’t load a whole load of everything available, eating up 10-20 MB of RAM or more for each process.

If it’s only 3 MB then it won’t be a problem.

How it handles dylibs will depend on the OS, but since you’re running many instances of the same executable, it’s quite likely the system will load each dylib once and reuse it for every running instance, since it’s the same dylib for all of them.
That will lower the overall memory requirements a fair bit.
If you have 100 running processes, you won’t have 100 copies of each dylib.

At least that’s what I’d expect.
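You can check this empirically on Linux rather than guessing. A sketch, assuming a Linux /proc filesystem: read VmRSS from /proc/self/status for the resident set in kB. Note that shared library pages are counted in every process’s RSS even though the kernel keeps one physical copy; the “Pss:” line in /proc/&lt;pid&gt;/smaps_rollup divides shared pages among the processes using them, which gives a fairer per-instance figure when you have 1000 copies running.

```c
#include <stdio.h>
#include <string.h>

/* Return this process's VmRSS in kB, or -1 on failure. */
static long vm_rss_kb(void)
{
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) return -1;
    char line[256];
    long kb = -1;
    while (fgets(line, sizeof line, f)) {
        if (strncmp(line, "VmRSS:", 6) == 0) {  /* e.g. "VmRSS:  1234 kB" */
            sscanf(line + 6, "%ld", &kb);
            break;
        }
    }
    fclose(f);
    return kb;
}
```

Have each node log its own figure at startup, or just run `ps -o rss,comm` over all 1000 instances once they’re up.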

That sounds likely with the dynamic libraries; it will be interesting to test this while doing my simulations on Linux.

I think I’ll go ahead and make the purchase - this system will be very useful for various things I do from time to time.

Thanks for the input, guys!