OpenELM is not currently supported in llama.cpp. If someone has the time to look into the architecture, support will likely be implemented.