Here's how it works.
Rendering a planetary surface the simple way (what we do now):
Get some heightfield data, project it onto a sphere, and store it in a texture. Apply the texture to a sphere mesh. Render the sphere with a (fairly complex) shader to display terrain, liquid level, molten effects, climate results, impacts, materials, transitions... done!
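For illustration, here's a minimal C++ sketch of the first step, storing heights in an equirectangular texture and looking them up by direction on the sphere. This is not actual US² code; the `Heightmap` type and its layout are assumptions for the example:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Hypothetical equirectangular heightmap: one height sample per
// (longitude, latitude) texel, stored row-major.
struct Heightmap {
    int width = 0, height = 0;
    std::vector<float> data;  // height in meters per texel

    // Map a direction on the unit sphere to equirectangular UVs and
    // fetch the nearest texel. Real code would filter bilinearly and
    // do this lookup in the shader rather than on the CPU.
    float sample(float x, float y, float z) const {
        const float pi = 3.14159265358979f;
        float u = std::atan2(z, x) / (2.0f * pi) + 0.5f;               // longitude -> [0,1]
        float v = std::acos(std::max(-1.0f, std::min(1.0f, y))) / pi;  // latitude  -> [0,1]
        int px = std::min(int(u * width),  width  - 1);
        int py = std::min(int(v * height), height - 1);
        return data[py * width + px];
    }
};
```

The projection and storage happen once, up front; at render time the shader just reads the texture, which is why this path is cheap.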
Rendering a planetary surface the procedural way (like Space Engine):
Cut the sphere into patches, keeping the geometry as high resolution as possible for whichever part of the sphere the camera is currently looking at. Run an algorithm over as many height samples as possible to generate a detailed heightfield. Displace the geometry (on the CPU, the GPU, or both) to extrude mountains, valleys, and other terrain features. For US², on top of all that, still render liquid level, molten effects, climate results, impacts, materials, transitions... phew, done!
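Very roughly, that pipeline looks like the sketch below. This is not Space Engine's or US²'s actual code; `Patch`, `shouldSplit`, `fractalNoise`, and `tessellate` are hypothetical names, and the noise function is a cheap stand-in for a real terrain generator:

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// Stand-in for the expensive height-generation step: a few octaves of
// sines here, where a real engine would use gradient noise, domain
// warping, erosion passes, etc. It runs once per vertex of every
// visible patch, and again whenever the level of detail changes.
float fractalNoise(const Vec3& p) {
    float h = 0.0f, amp = 0.02f, freq = 4.0f;
    for (int octave = 0; octave < 5; ++octave) {
        h += amp * std::sin(p.x * freq) * std::sin((p.y + p.z) * freq);
        amp *= 0.5f;
        freq *= 2.0f;
    }
    return h;
}

// One patch of a cube-sphere quadtree: one node of the
// "cut the sphere into patches" step.
struct Patch {
    Vec3 corner, uAxis, vAxis;  // extent on the cube face
    int depth;                  // subdivision level
};

// View-dependent refinement: keep splitting while the patch is large
// relative to its distance from the camera (a crude screen-space error
// heuristic; assumes a unit-radius planet).
bool shouldSplit(const Patch& p, const Vec3& cam, int maxDepth) {
    if (p.depth >= maxDepth) return false;
    Vec3 c { p.corner.x + 0.5f * (p.uAxis.x + p.vAxis.x),
             p.corner.y + 0.5f * (p.uAxis.y + p.vAxis.y),
             p.corner.z + 0.5f * (p.uAxis.z + p.vAxis.z) };
    float len  = std::sqrt(c.x*c.x + c.y*c.y + c.z*c.z);
    float dx = c.x/len - cam.x, dy = c.y/len - cam.y, dz = c.z/len - cam.z;
    float dist = std::sqrt(dx*dx + dy*dy + dz*dz);
    float patchSize = 2.0f / float(1 << p.depth);  // cube-face extent
    return patchSize > 0.5f * dist;
}

// Build displaced geometry for one leaf patch: cube face -> sphere ->
// extrude along the normal by the generated height.
std::vector<Vec3> tessellate(const Patch& p, int grid) {
    std::vector<Vec3> verts;
    verts.reserve((grid + 1) * (grid + 1));
    for (int j = 0; j <= grid; ++j) {
        for (int i = 0; i <= grid; ++i) {
            float u = float(i) / grid, v = float(j) / grid;
            Vec3 c { p.corner.x + u * p.uAxis.x + v * p.vAxis.x,
                     p.corner.y + u * p.uAxis.y + v * p.vAxis.y,
                     p.corner.z + u * p.uAxis.z + v * p.vAxis.z };
            float len = std::sqrt(c.x*c.x + c.y*c.y + c.z*c.z);
            Vec3 s { c.x/len, c.y/len, c.z/len };  // point on the unit sphere
            float r = 1.0f + fractalNoise(s);      // displaced radius
            verts.push_back({ s.x*r, s.y*r, s.z*r });
        }
    }
    return verts;
}
```

Note that every vertex of every visible patch pays the height-generation cost, and the whole tree has to be re-evaluated as the camera moves. That's the extra work the simple path never does.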
The difference: the simple case doesn't have to do anything costly with the sphere or the heightfield data. The complex case needs to generate view-dependent geometry (semi-expensive) and generate height data (quite expensive) in addition to everything the simple case does.
We will be able to do that at some point. It's just expensive, both in development cost and in CPU and GPU processing time. Space Engine doesn't focus on simulation nearly as much as we do: in US², your hardware is already hard at work simulating gravity, collisions, composition, stellar evolution, and so much more. That means our time budget for visuals is more limited than it would be for an application focusing mainly on visuals.
Hope I could clear that up,
- George