Kees Doets's definitions of logical consequence
Solution 1:
Your analysis of $\vDash$ is fine, but you've shuffled some (important) parts of the definition of $\vDash^*$. The latter definition says: For every $\mathcal A$, if all assignments in $\mathcal A$ make all the formulas in $\Sigma$ true then all assignments in $\mathcal A$ make $\phi$ true.
The crucial difference is that here the "all assignments" quantifier is applied separately to the "make all of $\Sigma$ true" assumption and the "make $\phi$ true" conclusion, whereas in $\vDash$ the "all assignments" quantifier is applied to the whole implication "if it makes $\Sigma$ true then it makes $\phi$ true."
The underlying general fact here is that a universal quantifier cannot be distributed across an implication: $\forall x\,(P(x)\to Q(x))$ is not equivalent to $(\forall x\,P(x))\to(\forall x\,Q(x))$. Example: "if all people are American then all people are left-handed" has the second form, and it's true simply because its antecedent "all people are American" is false. But "all Americans are left-handed", which has the first form $\forall x\,(P(x)\to Q(x))$, is false.
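To make the contrast fully explicit, write $\alpha$ for assignments in $\mathcal A$, and abbreviate "$\alpha$ makes every formula in $\Sigma$ true in $\mathcal A$" as $P(\alpha)$ and "$\alpha$ makes $\phi$ true in $\mathcal A$" as $Q(\alpha)$ (the abbreviations $P$ and $Q$ are introduced here only for this comparison). Then the two definitions have exactly the two shapes above:
$$\Sigma \vDash \phi:\quad \text{for every } \mathcal A,\ \forall \alpha\,\big(P(\alpha) \to Q(\alpha)\big),$$
$$\Sigma \vDash^* \phi:\quad \text{for every } \mathcal A,\ \big(\forall \alpha\, P(\alpha)\big) \to \big(\forall \alpha\, Q(\alpha)\big).$$
Since $\forall \alpha\,(P(\alpha)\to Q(\alpha))$ does imply $(\forall \alpha\, P(\alpha))\to(\forall \alpha\, Q(\alpha))$ (just not conversely), $\Sigma \vDash \phi$ always gives $\Sigma \vDash^* \phi$; it is the converse that can fail, as the counterexample in Solution 2 shows.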
Solution 2:
I think it's best to take the question's hint and understand that the two notions are not equivalent by looking at a counter-example.
Let $\Sigma$ contain only the formula $\forall x (x + y = x)$, and let $A$ be a model of this formula: i.e., for every assignment of a member of $A$'s domain to the variable $y$, the formula is true. This is essentially to say that $x + y = x$ holds for all $x$ and for all $y$ in $A$, because we can assign any member of the domain to $y$. So it turns out that $A \models \forall y \forall x (x + y = x)$; and since this is a sentence, it is true on every assignment in $A$, which is exactly what $\models^{*}$ requires of the conclusion. This establishes:
$$ \Sigma \models ^{*} \forall y \forall x (x + y = x) $$
Is it then also the case that
$$ \Sigma \models \forall y \forall x (x + y = x) $$
i.e., for any model $B$ and any assignment of a member of $B$'s domain to $y$ that satisfies $\forall x (x + y = x)$, does $B$ also satisfy $\forall y \forall x (x + y = x)$? Try to construct a model and assignment for which this fails. Hint: think of the natural numbers and assign $0$ to $y$; this is spelled out just below.
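Spelling out the hint, one model and assignment that work: let $B$ be the natural numbers with the usual addition, and assign $0$ to $y$. Then
$$\forall x \, (x + y = x) \text{ is satisfied under this assignment, since } x + 0 = x \text{ for every natural number } x,$$
$$\forall y \, \forall x \, (x + y = x) \text{ is not satisfied in } B \text{, since e.g. } 0 + 1 = 1 \neq 0.$$
So this particular assignment satisfies all of $\Sigma$ while $B$ fails to satisfy $\forall y \forall x (x + y = x)$, which refutes $\Sigma \models \forall y \forall x (x + y = x)$. (Note that this $B$ poses no threat to $\Sigma \models^{*} \forall y \forall x (x + y = x)$: it does not satisfy $\Sigma$ on every assignment, e.g. not on the assignment sending $y$ to $1$.)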
The subtle point is that, in reasoning about $\models^{*}$, we require that a model $A$ satisfies all of $\Sigma$ on every assignment in $A$. With plain old $\models$, we only require of a model $B$ that some particular assignment in $B$ satisfies $\Sigma$, and we then ask whether that same assignment satisfies $\phi$.