Interesting, but that's not what I'm getting at all from gemma and phi on ollama.
Then again, on a second attempt I get wildly different results for both of them. Might be a matter of advanced settings, like temperature, but single examples don't seem to be indicative of one being better than the other at this type of question.
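For what it's worth, you can pin the sampler settings when comparing the two so run-to-run variance doesn't swamp the comparison. A minimal sketch against Ollama's local REST API, assuming the default localhost:11434 endpoint and models pulled as "gemma" and "phi" (adjust the tags and prompt to whatever you're actually testing):

```python
# Minimal sketch: run the same prompt through gemma and phi with a fixed
# temperature and seed, so repeated runs are comparable.
# Assumes Ollama is running locally on its default port (11434) and that the
# models were pulled as "gemma" and "phi".
import requests

PROMPT = "Explain the difference between a mutex and a semaphore."

for model in ("gemma", "phi"):
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": model,
            "prompt": PROMPT,
            "stream": False,
            "options": {
                "temperature": 0.2,  # low temperature -> more deterministic output
                "seed": 42,          # fixed seed so repeated runs line up
            },
        },
        timeout=300,
    )
    resp.raise_for_status()
    print(f"--- {model} ---")
    print(resp.json()["response"])
```

With the temperature pinned low and the seed fixed, repeated runs should match much more closely, which makes single-example comparisons at least a bit more meaningful.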
I find gemma to be too censored; I'm not using it until someone releases a cleaned-up version.
Phi, on the other hand, outputs crazy stuff too often for my liking. Maybe I need to tune some inference parameters.
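If it's mostly the sampler running hot, lowering temperature and tightening top_p / top_k (plus nudging repeat_penalty up) usually calms it down. A rough sketch against the same local Ollama endpoint; the values here are just starting points I'd try, not anything from the Phi docs:

```python
# Rough sketch: conservative sampler settings to rein in erratic output.
# Assumes a local Ollama server on the default port and a model pulled as "phi".
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "phi",
        "prompt": "Summarize the trade-offs between SQLite and PostgreSQL.",
        "stream": False,
        "options": {
            "temperature": 0.3,     # lower = less randomness
            "top_p": 0.9,           # nucleus sampling cutoff
            "top_k": 40,            # limit candidate tokens per step
            "repeat_penalty": 1.1,  # discourage loops and rambling
        },
    },
    timeout=300,
)
resp.raise_for_status()
print(resp.json()["response"])
```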