All systems are operational

Past Incidents

10th August 2023

Maintenance: Hypervisors

Start of maintenance: 10.08.2023 10:00 PM CEST
End of maintenance: 10.08.2023 11:30 PM CEST

We have to announce maintenance work on all hypervisors at short notice. During the maintenance window, we will apply patches addressing the security vulnerability known as "Downfall".

Update: Our team identified, tested, and applied the necessary patches to eliminate the potential vulnerability. The process was completed without issues. Should you have any questions, feel free to reach out.

4th August 2023

Hypervisor Hardware Failure

One of our hypervisors is currently unavailable, and therefore so are the virtual machines running on it. We are still investigating the incident.

Your volume-based virtual machines are expected to be available again in a few minutes.

If you are running a virtual machine on the hypervisor's local NVMe storage, we will try to recover the data as quickly as possible. At the moment we cannot make any statement about the duration of the recovery or its probability of success.

16th July 2023

Ceph Issues

We experienced problems in our Ceph infrastructure on 16.07.2023 starting at 4:00 AM CET, which affected all resources running in one specific placement group. The result was slow or stuck requests. We implemented a fix at 10:00 PM CET, which resolved the issues, but we are still investigating the situation. We do not expect further issues. All systems are operational again. If you still experience problems, feel free to contact us.

28th June 2023

Hypervisor Maintenance

Start of maintenance: 30.06.2023 05:00 AM CEST
End of maintenance: 30.06.2023 07:00 AM CEST

In the timeframe mentioned above, we will perform emergency maintenance due to hardware issues on one of our hypervisors. Running resources will be stopped, evacuated, and started on other hypervisors afterwards. Affected customers will be informed by email. We apologize for any inconvenience. If you experience any problems, don't hesitate to contact us.