(Context: this is only about a binary classification problem using a Booster model.)
The reference docs state:

seed
[default=0] Random number seed. This parameter is ignored in the R package; use set.seed() instead.

seed_per_iteration
[default=false] Seed the PRNG deterministically via the iteration number.
These parameters are subsequently passed to the learning API to train a Booster. I am familiar with the stochastic behaviour that can be induced in a GBT algorithm, so I am not looking for further theoretical information on that topic.
My question is: how do these two seed-related parameters modify the stochastic vs. deterministic behaviour of the algorithm that can be induced through dataset (row) sampling and/or feature subsampling? And what is the difference between the two parameters?
Can completely deterministic behaviour actually be achieved in this implementation of GBT for a binary classification task, given that the input data remains the same?
Thanks a lot