The context is essentially everything the model receives as input alongside your prompt. Models have a limited context window, and the amount of context you need grows with the complexity of your code base (very simplified) and with your prompt.
To my knowledge, a large context also increases the token count of each request, so whenever possible keep your context small and relevant to your current prompt.
Tokens are the units of the model's workload: chunks of text, roughly comparable to words or word fragments.
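To get a feel for how prompt size translates into tokens, here is a minimal sketch using the common rule of thumb of roughly 4 characters per token for English text. This is only an approximation I'm assuming for illustration, not any model's real tokenizer; actual counts vary by model and tokenizer.

```python
# Rough token estimate using the ~4 characters-per-token heuristic.
# This is an approximation, NOT a real tokenizer -- actual token
# counts depend on the specific model's tokenizer.
def estimate_tokens(text: str) -> int:
    # Integer division by 4, with a floor of 1 token for non-empty text.
    return max(1, len(text) // 4)

prompt = "Explain what a context window is."
print(estimate_tokens(prompt))  # 33 characters -> about 8 tokens
```

For precise counts you would use the tokenizer that ships with the model you are calling; the heuristic above is just a quick way to sanity-check how much context a prompt consumes.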