This is a very important but somewhat difficult-to-explain question in maths. Let's try to define these terms for ordinary functions from R (the real numbers) to R.
Now, consider the derivative of f at a certain point, say x: you can think of it as the gradient of f at that point. So if your function f is constant, its graph is a flat line, so its gradient is 0 everywhere, and therefore its derivative is zero (i.e. f'(x) = 0 for all x). If, however, we have a linear function such as f(x) = 2x + 1, looking at the graph we see that its gradient is 2 (using the simple gradient formula), hence f'(x) = 2.
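To make that gradient calculation concrete, here is a minimal worked example, assuming we pick two sample points on the line (x = 1 and x = 3 are my own illustrative choices):

```latex
% Gradient of f(x) = 2x + 1 between the sample points x_1 = 1 and x_2 = 3
\[
  \frac{f(x_2) - f(x_1)}{x_2 - x_1}
  = \frac{(2 \cdot 3 + 1) - (2 \cdot 1 + 1)}{3 - 1}
  = \frac{7 - 3}{2}
  = 2.
\]
% Because the graph is a straight line, this value is the same whichever two points we pick,
% so the derivative is 2 at every x.
```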
Of course, we don't have to use the gradient formula every time; sometimes we can't use it at all (when the function isn't linear, the gradient changes from point to point). Luckily, there is a very helpful rule for functions of the form f(x) = x^n.
That is, f'(x) = nx^(n-1).
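As a quick sketch of how the rule is used (the exponent n = 3 here is just an example I have chosen):

```latex
% Power rule applied to f(x) = x^3
\[
  f(x) = x^3 \quad\Longrightarrow\quad f'(x) = 3x^{3-1} = 3x^2.
\]
% The linear example above is consistent with this too: 2x = 2x^1 differentiates to 2x^0 = 2,
% and the constant +1 contributes nothing, giving f'(x) = 2 as found from the gradient formula.
```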
But it is still good to understand what the derivative really is; once you understand its relationship with the gradient, you will already be ahead of most other A-level mathematicians. Indeed, have you ever wondered why f has a minimum (or maximum) at a point x where f'(x) = 0? Draw a picture of f near a point where its gradient is 0, and it will all become natural.
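For a concrete picture of that last point, here is a small worked case (f(x) = x^2 is my own choice of example):

```latex
% Stationary point of f(x) = x^2
\[
  f(x) = x^2 \quad\Longrightarrow\quad f'(x) = 2x,
  \qquad\text{and}\qquad f'(x) = 0 \iff x = 0.
\]
% x = 0 is exactly where the parabola bottoms out: the gradient is negative just to the left,
% zero at the minimum itself, and positive just to the right.
```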