Wiki source code of Announcements

Version 231.1 by hbpadmin on 2023/12/20 18:47

1 === **Migration of IAM service (2023-12-20) (Resolved)** ===
2
3 The Collaboratory IAM service is being migrated from the CSCS instance to a new JSC instance. As such, IAM will be down during this maintenance window. All services relying on IAM will be unavailable during this time frame. 
4 \\The migration went well and all services should now be restored.
5
6 === **Unable to upload files to the Bucket (2023-11-20) (Resolved)** ===
7
8 Bucket uploads are currently experiencing issues. Our investigation suggests that an issue with uploading files to the CSCS Swift storage container is preventing our system from working. We have raised the issue with CSCS engineers and hope to resolve it soon.
9 \\Resolution: The issue was due to the quota allocated to our Swift storage. It was resolved by the CSCS team.
10
11 === **Maintenance affecting most EBRAINS services (2023-10-25) (Resolved)** ===
12
13 On Oct 25 2023, the CSCS site is performing site-wide maintenance which will also affect their network and/or the OpenStack servers. All of the EBRAINS core services will be shut down in advance of this maintenance. This includes the EBRAINS Collaboratory IAM service which provides single sign-on to all EBRAINS services. A few EBRAINS services not requiring user authentication and not hosted at CSCS might remain active, most significantly the EBRAINS web portal. Service providers are responsible for stopping and restarting their services.
14
15 The services will remain down for several hours. EBRAINS AISBL will restart the core services it is responsible for as soon as the green light is given by CSCS. This is expected to be around 12:00 CEST.
16
17 UPDATE #1:
18
19 The maintenance on OpenStack and the network at CSCS has been completed shortly before noon. The EBRAINS AISBL team has restarted all the production services it is responsible for. Service providers are invited to reboot/check the status of their services. The supercomputing resources at CSCS are still stopped for maintenance until later this afternoon.
20
21 One of the NGINX proxies of the OKD had issues restarting. Those have been resolved. OKD and the Collaboratory Lab are up again.
22
23 Users can [[open support tickets>>https://ebrains.eu/support]] if they identify any problem on the platform.
24
25 (% style="color:#3498db" %)UPDATE #2:
26
27 (% style="color:#3498db" %)All production services and most integration services should be running again. OKD INT still has issues with one of its VMs. CSCS is working on recovering it.
28
29 (% style="color:#3498db" %)UPDATE #3:
30
31 (% style="color:#3498db" %)All production and integration services are running again.
32
33 === **Collaboratory IAM issue (2023-10-23) (Resolved)** ===
34
35 (% class="wikigeneratedid" %)
36 The IAM service was recently containerized. That Docker container uses an additional SSL certificate. This certificate expired and its automatic renewal was not yet functional. The certificate has been manually updated and its automatic renewal has been prioritized.
37
38 === **Collaboratory Drive issue (2023-10-13) (Resolved)** ===
39
40 The upgrade of the OS on the Drive server encountered an error which is preventing the Drive from functioning correctly. We are working to resolve this as soon as possible and will update this page with more information when we can.
41
42
43 **UPDATE**: The upgrade of the OS has been completed and access to the Drive has been restored.
44
45 === **Collaboratory IAM issue (2023-09-08)** ===
46
47 An issue was discovered after the upgrade of IAM which prevents users from accessing their account page. As such, the version of Keycloak is being rolled back to the version in use before the release. We apologise for any inconvenience this has caused and will perform the update at a later stage.
48
49 === **Collaboratory Drive issue (2023-08-31)** ===
50
51 Renaming an Office file in the Collaboratory Drive while that file is open in an Office session for collaborative editing can cause data loss. Before renaming a file, users should:
52
53 * open the file in Office
54 * wait to be the only user in the Office session (see the list of users at top right)
55 * click on the disk icon to force a save (icon is at top left)
56 * in the Drive, rename the file while checking that no other user enters the Office session
57 * close the Office session while checking that no other user enters it
58
59 We are looking into how this issue can be fixed.
60
61 === **Collaboratory Lab issue (2023-08-22) (Resolved)** ===
62
63 The Collaboratory Lab had an issue with its instance running at CSCS. EBRAINS users are invited to select the JSC server instead when starting up the Lab.
64
65 The Lab was restored at CSCS. The issue was due to a storage problem that affected one of the key services that participate in providing the Lab service.
66
67 === **Collaboratory Office issue (2023-08-21) (Resolved)** ===
68
69 The Collaboratory Office service had an issue this morning. It seems the issue is caused by an automatic renewal of an SSL certificate that failed. We are looking to recover the service ASAP.
70
71 Please note that Collaboratory Office uses the open standard docx, xlsx, and pptx formats. The files can be downloaded to be read and possibly edited in MS Office and other compatible applications. Users editing documents online should coordinate with others to avoid forking the document into multiple divergent copies by editing outside of the collaborative editing service.
72
73 === **Collaboratory issue (2023-07-04)** ===
74
75 (% class="wikigeneratedid" %)
76 Some users may experience slow response times when accessing a service for the first time in a day. We are working on resolving this ASAP.
77 \\**Update 2023-07-10: **The issue has been identified. We are working to resolve this and, in the meantime, we will be performing some maintenance to help alleviate the issue. A maintenance window will be set for tomorrow.
78
79 (% class="wikigeneratedid" %)
80 **Update 2023-07-11: **Maintenance was performed to increase performance while we continue working on the underlying issue. 
81 \\Due to the previous maintenance, we believe the issue has been alleviated. We will continue investigating to prevent it from recurring, but users should no longer experience the timeout issue. If you do, please contact support at [[https:~~/~~/ebrains.eu/support>>https://ebrains.eu/support]], thank you.
82
83 (% class="wikigeneratedid" %)
84 This section will continue to be updated as we have more information.
85
86 === **Collaboratory Lab issue (2023-05-31) (Resolved)** ===
87
88 This morning, the Lab at CSCS went down due to an internal JupyterHub issue. This issue was resolved following a re-deployment of the Collaboratory Lab to CSCS.
89 \\We will continue investigating the root cause of this issue.
90
91 === **Collaboratory Drive issue this morning (2023-05-08) (Resolved)** ===
92
93 This morning, the Collaboratory Drive server encountered an out-of-memory issue. The issue has been resolved. This should not have affected any data.
94
95 === **Collaboratory Drive & Lab work (2023-04-09)** ===
96
97 We are announcing a down time for the Collaboratory Drive this coming Sunday April 9 from noon CEST lasting potentially all day. --This will allow us to activate a secondary Drive server, mirror copy of the primary server, which we intend to use for backups only.--
98
99 During the coming days, we will also be tentatively deploying and exercising new policies in the Drive.
100
101 1. We will limit or disable the creation of [[core dumps>>https://en.wikipedia.org/wiki/Core_dump]] in the Drive by the Lab. A large number of sizeable core dumps has recently been filling up the Drive, which caused the disk-full situation earlier this week to occur much faster than it otherwise would have.
102 1. We will limit the maximum file size that can be generated by the Lab, both in the Drive and in the local file system of the user's Lab container. This file size will be aligned with the maximum upload size limit to the Drive which is currently 1 GB. The reasons for this are:
103 11. In the Drive: this aligns the policy for files created by the Lab with the policy for uploaded files. The Drive is not intended for very large files; collabs have a Bucket storage which is optimized for this purpose.
104 11. In the local file storage of Lab containers: each Lab container is only accessible to a single user, but the storage resource is shared at the OpenShift level by all the pods running on the same worker node. A Lab container therefore has the possibility of causing a disk-full condition on other users' containers. The file size limit will reduce this risk. Users are reminded that stress testing any resources provided on the EBRAINS RI is strictly forbidden without explicit authorization from EBRAINS Technical Coordination.
105 1. We will "move" (copy and delete) files larger than the 1 GB limit from the Drive to the Bucket of the same collab. Files that are "moved" will be deleted from the Drive and a file with the same name extended with "_moved.txt" will be put in its place. This file will inform users to check the Bucket for their file. Example: the file **//MyTalk.mp4//** will be deleted and replaced by a file //**MyTalk.mp4_moved.txt**.//
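The "move" policy above can be sketched in Python. This is a hedged illustration only: the real implementation would use the Bucket's data-proxy API, which is not shown here, so a destination directory stands in for the Bucket upload, and the size limit and placeholder wording mirror the announcement.

```python
import shutil
from pathlib import Path

SIZE_LIMIT = 1 * 1024**3  # 1 GB, the Drive upload limit mentioned above


def move_oversized(drive_dir: str, bucket_dir: str, limit: int = SIZE_LIMIT) -> list[str]:
    """Move files larger than `limit` out of the Drive tree, leaving a
    '<name>_moved.txt' placeholder behind (e.g. MyTalk.mp4_moved.txt)."""
    moved = []
    bucket = Path(bucket_dir)
    bucket.mkdir(parents=True, exist_ok=True)
    for f in Path(drive_dir).rglob("*"):
        if f.is_file() and f.stat().st_size > limit:
            # Flat copy standing in for the Bucket upload (sketch only).
            shutil.copy2(f, bucket / f.name)
            f.unlink()  # delete the original from the Drive
            placeholder = f.with_name(f.name + "_moved.txt")
            placeholder.write_text(
                f"{f.name} exceeded the 1 GB Drive limit and was moved "
                f"to this collab's Bucket."
            )
            moved.append(f.name)
    return moved
```

The placeholder keeps the original file name visible in the Drive listing so users know where to look for the moved file.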
106
107 The new policies are not technically releases, but they will be announced on the Collaboratory [[Releases>>doc:Collabs.the-collaboratory.Releases.WebHome]] page.
108
109 Check this space for the announcement of the end of the work.
110
111 === **Collaboratory Drive issue (2023-04-04) (Resolved)** ===
112
113 Some users are still experiencing issues while trying to access the Drive. We are investigating the issue and will update this announcement as soon as we have more information.
114
115 The issues on the Drive have been resolved.
116
117 We are still looking to recover files that were lost due to the disk full condition earlier this week. It appears that there were no files lost that were older than Thursday March 30. If you identify any such file, please contact EBRAINS Support.
118
119 Update: We have looked into all our options for recovering lost data from the event earlier this week. We're sorry to have to announce that no more data will be recoverable. We are however going to reattempt a continuous backup strategy to limit the risk and potential extent of data loss.
120
121 === **Collaboratory Drive repair (2023-04-03) (Resolved)** ===
122
123 (% class="wikigeneratedid" %)
124 After resizing the Drive to address the issue mentioned in the previous announcement, we noticed that the drives of specific collabs have entered into an unstable state. Users may experience issues while trying to access these drives. We are currently repairing these issues and this page will be updated once fully repaired.
125
126 (% class="wikigeneratedid" %)
127 The Drives of problematic collabs have been repaired. The repair process has unfortunately not been able to recover a few files. We are still looking whether recovery is possible.
128
129 === **Collaboratory Drive has filled up (2023-04-03) (Resolved)** ===
130
131 (% class="wikigeneratedid" %)
132 We monitor the Drive usage to make sure that there is always enough space for users. It seems that a user has bypassed our limitations and created one or more extremely large files, filling up the Drive storage space. We are in the process of adding space. We will also be looking to identify the user that filled up the Drive.
133
134 (% class="wikigeneratedid" %)
135 We have added additional disk space to the Drive. It seems the Drive was filled with a large number of core dumps generated over the past days. The reports we received suggested the Drive had enough spare space to run for 2 more weeks; we are investigating that discrepancy.
136
137 (% class="wikigeneratedid" %)
138 We will look into limiting the capability of generating core dumps in the Lab and automatically deleting core dumps after a limited time (e.g. a week). This will be discussed in the TC Weekly call on April 4 at 15:00 CET.
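The two mitigations under discussion can be sketched in Python. This is a minimal illustration under stated assumptions, not the actual Lab configuration: core dump creation can be disabled per process via `RLIMIT_CORE` (the equivalent of `ulimit -c 0`), and stale core files can be swept on a schedule; the one-week cutoff mirrors the example above.

```python
import os
import resource
import time
from pathlib import Path

# Disable core dump creation for this process and its children
# (equivalent to `ulimit -c 0` in a shell).
resource.setrlimit(resource.RLIMIT_CORE, (0, 0))


def delete_old_core_dumps(root: str, max_age_days: int = 7) -> int:
    """Delete 'core' / 'core.<pid>' files older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    deleted = 0
    for f in Path(root).rglob("core*"):
        if f.is_file() and (f.name == "core" or f.name.startswith("core.")) \
                and f.stat().st_mtime < cutoff:
            f.unlink()
            deleted += 1
    return deleted
```

A periodic job (e.g. a daily cron run over each collab's Drive mount) would call `delete_old_core_dumps` with the agreed retention period.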
139
140 === **OpenShift is down at CSCS (2023-03-23) (Resolved)** ===
141
142 The OpenShift service at CSCS failed overnight. We are working to identify the issue and recover the service ASAP.
143
144 The services affected are all those running on the OpenShift service at CSCS including:
145
146 * Collaboratory Lab at CSCS (users can use the Lab running at JSC by visiting [[lab.de.ebrains.eu>>https://lab.de.ebrains.eu]]),
147 * atlas viewers,
148 * simulation services (NEST desktop, TVB, Brain Simulation Platform)
149
150 The OpenShift server has been restarted.
151
152 === **Collaboratory Lab at JSC is down (2023-03-08) (Resolved)** ===
153
154 The Collaboratory Lab servers are currently down at JSC, initially due to a power outage. Power has since been restored at JSC, but the servers have not yet come back online. More information will be provided soon. In the meantime, Lab users still have access to the servers at CSCS.
155
156 Update: The servers at JSC have been restored.
157
158 === **Maintenance to Openshift (2023-02-21) (Resolved)** ===
159
160 Services that use the NFS filesystem on Openshift will be down today from 17:15 CET for up to 30 minutes. This includes the Collaboratory Lab.
161
162 === **Collaboratory Office changing Docker image (2023-01-31) (Resolved)** ===
163
164 We are changing the Docker image used for the Office service. The image was previously changed last week when we had a license issue.
165
166 === **Collaboratory Office licensing issue (2023-01-24) (Resolved)** ===
167
168 (% class="wikigeneratedid" %)
169 The Collaboratory Office service uses the OnlyOffice software. The license file that we were using has expired. This greatly restricts the number of users that can use the service in edit mode simultaneously.
170
171 (% class="wikigeneratedid" %)
172 Our team has been trying to contact OnlyOffice to resolve the issue as quickly as possible, but OnlyOffice has been slow to respond this week.
173
174 (% class="wikigeneratedid" %)
175 In the meantime, users can download, edit, and re-upload files, but obviously this does not allow simultaneous edits by multiple users. Please check the history of files to make sure that you do not overwrite someone else's edits when uploading your updated file.
176
177 (% class="wikigeneratedid" %)
178 **Resolved**: It seems the issue has been resolved.
179
180 === **Collaboratory Lab at JSC is not working (2022-12-13) (Resolved)** ===
181
182 The JSC cloud infrastructure is experiencing issues which cause services running there to become unreliable. As such, the Lab instance at JSC is not usable. We are in contact with the JSC team and monitoring the situation. In the meantime, please select the CSCS site when starting a new Lab session.
183
184 **Resolved**: This issue was resolved by rebooting the worker node that was causing the issue.
185
186 === **Collaboratory Bucket is currently full (2022-11-03) (Done)** ===
187
188 The quota for the Collaboratory Bucket is currently full. We are in the process of increasing this limit and will restore service as soon as possible. In the meantime, you still have read access to files that are stored in collab Buckets.
189 \\Update: Quota has been increased and full service has been restored.
190
191 === **Six EBRAINS services will experience a short downtime (2022-10-27) (Done)** ===
192
193 The following 6 services will experience a short downtime today in order for us and the infrastructure team at CSCS to consolidate the status of the corresponding servers. We plan to stop the services one after the other for an estimated 5 minutes each during a 30-minute window (see banner). The services are:
194
195 1. Collaboratory IAM: authentication of users
196 1. Knowledge Graph Search
197 1. Neuroglancer: for the 3D visualisation of brain images
198 1. PLUS: an HBP internal project coordination tool
199 1. An internal tool for monitoring the EBRAINS platform which is not user facing
200 1. A deprecated service which we are maintaining to improve the user experience of a few users
201
202 The maintenance on each of these services has been completed.
203
204 === **EBRAINS infrastructure causing issues with the Drive (2022-10-19) (Fixed)** ===
205
206 The Drive is currently down due to another issue at CSCS. We are working on getting this resolved as soon as possible. We will be checking with CSCS on their root cause analysis and will see that steps are taken to avoid the recurrence of such an issue.
207
208 (% style="color:#3498db" %)Update 2022-10-19 18:30 CEST(%%): We again have access to the filesystem which hosts the files of the Drive. Unfortunately, the Drives of a few collabs have been corrupted amid the multiple outages today. We are recovering the contents of the Drives of those collabs at least to their state as of last night. We hope to recover some of today's modifications to files in the Drive (uploads and edits), but they may have been lost. We apologize for the inconvenience.
209
210 (% style="color:#3498db" %)Update 2022-10-19 23:00 CEST(%%): The verification/repair of the filesystem is still running. It should be finished by tomorrow morning. We will make the Drive service available first thing in the morning if the repair does not identify any blocking points.
211
212 (% style="color:#3498db" %)Update 2022-10-20 10:00 CEST(%%): The verification/repair of the filesystem has finished. The Drive is now available for use again. (% style="color:#e74c3c" %)**Some data loss**(%%) has occurred for content uploaded or edited yesterday and we apologise for the inconvenience this has caused.
213
214 === **EBRAINS infrastructure was down this morning (2022-10-19) (Fixed)** ===
215
216 The cloud infrastructure that hosts most EBRAINS services at the [[Swiss National Supercomputing Centre (CSCS)>>https://cscs.ch]] experienced some downtime this morning from 9:25 to 10:15 CEST. This interrupted access to the storage of most of our services, making them unavailable to end users. Service is up again.
217
218 We will update this page with more information when we receive information about the root causes.
219
220 We apologize to EBRAINS users for the down time. Please [[contact Support>>https://ebrains.eu/support]] if you still experience issues with any service.
221
222 === **Drive is currently down (2022-10-17) (Fixed)** ===
223
224 The Drive is currently down and not usable. This is due to infrastructure issues and service will be restored as soon as possible.
225
226 Update 2022-10-17 21:30 CEST: CSCS and RedHat are still debugging the issue.
227
228 2022-10-18: The issue has been resolved and the Drive is now accessible again. The issue was caused by the restart of an OpenStack hypervisor with an inconsistent status of volumes in OpenStack. This locked the VM running the Collaboratory Drive in a shutoff state.
229
230 === **Unable to save files to Drive (2022-10-04) (Fixed)** ===
231
232 The file system that the Drive runs on is currently full. Unfortunately, our detection of the file system being nearly full did not work. As such, users can neither upload files nor save changes to their files in the Drive. Due to another, unrelated issue, it is not possible for us to simply expand the file system. For this reason, we are currently moving the Drive data to a bigger volume, which is causing the delay. We will update this page as soon as the move is finished.
233 \\Please note that since most services run off the Drive, this also affects the Lab: you can still run and even modify notebooks, but any changes you make will not be saved.
234
235 NOTE: We transferred everything from the affected volume to a larger volume; the service is now usable again.
236
237 === **No uploads allowed to Data-proxy (2022-08-31) (Fixed)** ===
238
239 At the moment uploads are not permitted due to the data-proxy exceeding its quota allowance. We are working to solve this as soon as possible.
240 \\**Fixed: **Quota was increased; files can now be uploaded again.
241
242 === **Collaboratory Drive maintenance (2022-08-19) (Completed)** ===
243
244 The Drive was meant to be taken down for routine maintenance this afternoon to increase the space available for Drive storage. That operation has had to be rescheduled due to technical issues on the storage infrastructure.
245
246 === **Intermittent issues with the Bucket (data-proxy) (Solved)** ===
247
248 As reported by the main banner, there had been intermittent issues with the Bucket occasionally going down for a short amount of time. This has been resolved by the maintenance performed at CSCS on August 10th. If you encounter any further issues related to the Bucket, please open a ticket to support.
249
250 === **Storage and Cloud service maintenance (2022-08-10) (Completed)** ===
251
252 This maintenance was shifted from August 3 to August 10 due to an incident on another server managed by the ETHZ central IT services.
253
254 A maintenance operation at CSCS requires that some HBP/EBRAINS services be stopped Wednesday August 3 morning. The services affected are those using NFS storage volumes on the Castor cloud service. EBRAINS service providers that migrate their VMs to CEPH storage ahead of that date can keep their services running during the maintenance.
255
256 **__Timeline__**: all times CEST
257
258 * **08:00**: Service providers shutdown services running on OpenStack or OpenShift at CSCS
259 * **08:30**: Maintenance start by CSCS team
260 * **12:00**: Planned maintenance end by CSCS team. Service providers check that services have come back online. (% style="color:#1abc9c" %)Check this page for updates
261 * (% style="color:#1abc9c" %)**15:20**: Maintenance ended at 15:20 CEST.
262
263 The storage back-end used by HBP/EBRAINS services has been causing some issues which have had repercussions on access to the object storage and OpenStack cloud service and thereby on HBP/EBRAINS services which run on this infrastructure. The issue has been identified and CSCS is ready to deploy a patch on the storage back-end. This will require that services running on OpenStack at CSCS be stopped for the duration of the maintenance.
264
265 There is never a good time for maintenance. We’re heading into a few weeks when more users will be on vacation, and some of the service providers may also be away. Hopefully this will impact as few people as possible. We apologize in advance for any inconvenience the downtime may cause.
266
267 === **Infrastructure issues at CSCS (2022-08-01)** ===
268
269 The infrastructure at CSCS on which EBRAINS services run has failed over the weekend. August 1 was a bank holiday in Switzerland where CSCS is located. The situation was recovered before 10:00 CEST on Tuesday August 2.
270
271 The services affected were all those running on the OpenShift service at CSCS including:
272
273 * Collaboratory Lab at CSCS (please choose the JSC site when starting the Lab),
274 * image service,
275 * atlas viewers,
276 * simulation services (NEST desktop, TVB, Brain Simulation Platform)
277
278 The planned maintenance listed above is expected to prevent the recurring issues that have been experienced over the past months.
279
280 We apologize for the inconvenience.
281
282 === **Infrastructure issues (2022-07-13)** ===
283
284 Several services on our EBRAINS **OpenShift **server running at CSCS were detected as having issues.
285
286 These issues may potentially affect services running on that OpenShift server including:
287
288 * Collaboratory Lab at CSCS (please choose the JSC site when starting the Lab),
289 * image service,
290 * atlas viewers,
291 * simulation services (NEST desktop, TVB, Brain Simulation Platform)
292
293 The OpenShift situation has been resolved. Some services may need to be restarted. Please contact [[Support>>https://ebrains.eu/support]] if you identify a problem.
294
295 The Collaboratory **Bucket **service (aka data proxy) has been intermittently down over the past several days due to an issue with the Swift archive/object storage at the infrastructure level at CSCS.
296
297 We are working with the CSCS team to identify the cause of the issues with the Bucket service and we are having the infrastructure restarted any time issues occur. You can notify [[Support>>https://ebrains.eu/support]] if you identify any down time.
298
299 === **HBP and EBRAINS websites issue (2022-06-29)** ===
300
301 The HBP and EBRAINS main websites are down because of an issue with the .io top level domain DNS. This issue is preventing our websites from rendering images, CSS files, and other files referenced from the HTML of the web pages. The issue is not internal to our infrastructure nor to that of our cloud provider. We have decided to bring down the two websites for the time being.
302
303 The issue was resolved in the afternoon shortly after our provider confirmed their service had returned to nominal status.
304
305 === **Issue with the Bucket service (2022-06-27)** ===
306
307 The bucket service (aka data proxy) is down due to an issue with the Swift archive/object storage at the infrastructure level at CSCS. The issue has been resolved. It was caused by the high availability load balancers located in front of the Swift storage at CSCS.
308
309 We apologize for the inconvenience.
310
311 === **Maintenance of the Drive (2022-04-14)** ===
312
313 The drive service will be down for maintenance on Thursday April 14 from 18:00 CEST for an estimated 2 to 3 hours. We are performing a routine garbage collection on the storage. During the downtime, users will not have access to the Drive: no reading/downloading of files, no adding/modifying files. This in turn means the Lab will not be usable either.
314
315 We apologize in advance for the interruption and thank you for your understanding.
316
317 The maintenance is done.
318
319 === **Issue with the Drive (2022-02-05)** ===
320
321 The Drive server had an issue starting Saturday Feb 5 from 5 AM CET. The service was brought back online and was operational on Saturday. The issue was related to a full disk on the server caused by logs; the server then had further issues rebooting. No data was lost and backups were not affected.
322
323 === **Issue with OpenShift (2022-01-17)** ===
324
325 One of the NGINX servers running in front of our OpenShift service is down due to a problem at the infrastructure level. We are in communication with the infrastructure team to get the issue resolved as quickly as possible.
326
327 The issue was resolved by the infrastructure team. The server at fault is running again. All services are up and accessible again.
328
329 === **Issue with the Collaboratory like/favourite feature (2022-01-13)** ===
330
331 The feature to like/favourite a collab was down for a day due to a change in the deployment of the Wiki service. This issue has been resolved without data loss. We apologise for any inconvenience caused by this issue.
332
333 === **Issue on the OpenShift server at CSCS (2021-12-28)** ===
334
335 The EBRAINS OpenShift server running at CSCS was detected as being down. This issue affects all the services running on that OpenShift server including:
336
337 * Collaboratory Lab,
338 * image service,
339 * atlas viewers,
340 * simulation services (NEST desktop, TVB, Brain Simulation Platform)
341
342 We are working with the CSCS team to reactivate the service ASAP.
343
344 The service was reestablished at around 14:15 CET. The infrastructure team will continue looking into the potential causes of the problem with RedHat. The suspected cause lies in network access to the storage.
345
346 === **Issue on the Forum service (2021-12-27)** ===
347
348 The Forum service went down over the weekend along with one of the redundant OpenShift servers. The issue was due to a problem at the infrastructure level.
349
350 The Forum was unavailable for a few hours on Monday. There was no perceptible effect on the services running on OpenShift.
351
352 === **Issue on the OpenShift server at CSCS (2021-11-24)** ===
353
354 The EBRAINS OpenShift server running at CSCS was detected as being down at 16:00 CEST. This issue affects all the services running on that OpenShift server including:
355
356 * Collaboratory Lab,
357 * image service,
358 * atlas viewers,
359 * simulation services (NEST desktop, TVB, Brain Simulation Platform)
360
361 We are working with the CSCS team and RedHat (provider of the OpenStack solution on which OpenShift runs) to reactivate the service ASAP.
362
363 The issue affected the central VM node of the OpenShift service. Restarting the VM required that we resolve multiple issues including restarting at the same IP address and unlocking the storage volumes of that VM. (% style="color:#16a085" %)The central node has now been restarted. The same issue has occurred on three other instances of OpenShift VMs. They have been restarted too. OpenShift is up and running again. The DNS redirection(s) to the maintenance page are being removed. The services should be operational again.
364
365 (% style="color:#16a085" %)The root cause seems to be at the infrastructure level and RedHat is involved in analyzing that.
366
367 We will keep updating this page with more information as it comes in.
368
369 === **Maintenance of multiple services (2021-11-11 to 2021-11-19)** ===
370
371 We are migrating the last of our services away from an old cloud infrastructure. The production services were taken down on Wednesday November 17 and are now all back online. We will continue working on the non-production services, on returning the services to their redundancy level prior to the migration, and on various final tweaks.
372
373 Please contact [[support>>https://ebrains.eu/support]] about any new issues you identify on our services.
374
375 The services which were migrated during this maintenance include:
376
377 * Collaboratory Bucket (% style="color:#2ecc71" %)**DONE**
378 * Knowledge Graph Search and API (% style="color:#2ecc71" %)**DONE**
379 * OpenShift servers which includes: (% style="color:#2ecc71" %)**DONE**
380 ** Collaboratory Lab, (% style="color:#2ecc71" %)**DONE**
381 ** image service, (% style="color:#2ecc71" %)**DONE**
382 ** atlas viewers,
383 ** simulation services (NEST desktop, TVB, Brain Simulation Platform)
384
385 * EBRAINS Docker registry, (% style="color:#2ecc71" %)**DONE**
386 * the Forum linked from the Collaboratory Wiki, (% style="color:#2ecc71" %)**DONE** (may have login issues)
387 * the Education website. (% style="color:#2ecc71" %)**DONE**
388
389 We apologize for the inconvenience and thank you for your understanding.
390
391 === **Technical difficulties on the Drive (2021-11-03)** ===
392
393 The Drive has experienced some technical difficulties over the past weeks caused by multiple issues.
394
395 A reboot of the Drive server due to a hardware failure revealed an issue in the communication between the Lab and the Drive. A container is started for each Lab user who opens the Lab in their browser to run their notebooks. That container remains active by default well beyond the end of the execution of the notebook and the closing of the browser tab running the Lab, unless the user explicitly stops their container. Those containers were causing unusually high traffic to the Drive. We identified the problem and found a solution with an upgrade of the tool which the Drive is based on. We have had to shift the deployment date for this fix multiple times but will announce it ASAP. In the meantime **we ask Lab users to kindly stop their servers** when they are not actively working in the Lab. This is done from the JupyterLab menu: //File > Hub control panel > Stop my server//.
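For users who prefer to script this rather than use the menu, JupyterHub exposes a REST endpoint for stopping a user's single-user server (DELETE /hub/api/users/{name}/server). The sketch below only builds the request; the hub URL, username, and token are placeholders, and a valid JupyterHub API token is assumed.

```python
import urllib.request


def build_stop_server_request(hub_url: str, username: str,
                              api_token: str) -> urllib.request.Request:
    """Build the JupyterHub REST call that stops a user's single-user
    server: DELETE {hub_url}/hub/api/users/{username}/server."""
    return urllib.request.Request(
        url=f"{hub_url.rstrip('/')}/hub/api/users/{username}/server",
        method="DELETE",
        headers={"Authorization": f"token {api_token}"},
    )


# Actually sending the request requires a valid API token, e.g.:
# urllib.request.urlopen(build_stop_server_request(
#     "https://lab.ebrains.eu", "myuser", "MY_API_TOKEN"))
```

This achieves the same effect as //File > Hub control panel > Stop my server//, which remains the recommended route for most users.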
396
397 The cloud infrastructure that the Drive is running on is being upgraded and the Drive service had to be moved to the newer servers. An incorrect management command of the server caused the migration of the data to be launched twice, with the second transfer overwriting production data dated Nov 2 with stale data from Oct 20. Several backup issues stacked on top of this problem. We have been able to recover a backup made on the night of Sunday Oct 31 to Monday Nov 1. We are verifying this data for issues before we reopen access to the Drive. **Some data loss is expected** for data stored on the Drive after that backup and before we put up a banner on Tuesday Nov 2 around noon asking users, as a precaution, not to add new data to the Drive. Many countries had a bank holiday on Nov 1, so for many users the data loss should be limited to a few hours on Nov 2.
398
399 We sincerely apologize for the problems incurred. We appreciate that the service we provide must be trustworthy for our users. We are taking the following actions to ensure that similar problems do not happen again:
400
401 * Review of the backup procedures on the Drive server which will be extended to all EBRAINS services which we manage.
402 * Request notification from the underlying infrastructure when service accounts reach the limit imposed by their storage quota.
403 * Improve the identification of production servers to reduce the risk of the operations team misidentifying which servers are running production services.
404 * Improve our procedure for communication about such incidents.
405
406 (% style="color:#16a085" %)**The Drive service is up again. We have recovered all the data that we could and the Drive is working normally. We will be performing a Drive upgrade as soon as possible; check the Wiki banner for more information.**
407
408 Additionally, the Collaboratory team will be looking into mirroring services onto more than one Fenix site to reduce down time when incidents occur. This task will be more challenging for some services than for others; it might not be materially feasible for some services, but we will be analysing possibilities for each Collaboratory service.
409
410 === **Maintenance on 2021-10-19** ===
411
412 Due to migration to new hardware, various EBRAINS services will be down for up to 1 hour while these services and their data are migrated. Please see our [[Releases>>doc:Collabs.the-collaboratory.Releases.WebHome]] page for more information on this release.
413
414 === **Infrastructure failure on 2021-10-04** ===
415
416 A failure of the OpenStack infrastructure at the Fenix site which hosts most of our services brought down many EBRAINS services this morning before 9 AM CEST. A second event later in the morning at the same Fenix site brought down most other EBRAINS services. The infrastructure was brought back online in the afternoon and we have been reactivating services as quickly as possible. Some issues are still being addressed at the infrastructure level [17:00 CEST].
417
418 We apologize for the inconvenience and thank all users for their understanding.
419
420 === **SSL Certificates expiring September 30th** ===
421
422 Our SSL certificates are produced by Let's Encrypt. One of their root certificates expires on September 30. This may cause issues for access to API services, especially from older browsers. You can find more information about these changes at the following links:
423
424 * [[https:~~/~~/letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/>>url:https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/]]
425 * [[https:~~/~~/letsencrypt.org/certificates/>>url:https://letsencrypt.org/certificates/]]
426
427 As mentioned in the above links, if you are using one of our APIs, you should take the following steps to prevent issues:
428
429 * You must trust ISRG Root X1 (not just DST Root CA X3)
430 * If you use OpenSSL, you must have version 1.1.0 or later
431
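As a quick self-check, the shell snippet below (a sketch, not an official EBRAINS tool) compares the locally installed OpenSSL version against the 1.1.0 minimum mentioned above, using version-aware sorting:

```shell
# Minimum OpenSSL version expected to build a chain to ISRG Root X1.
required="1.1.0"

# Extract the version number, e.g. "OpenSSL 1.1.1k  25 Mar 2021" -> "1.1.1k".
# Falls back to "0" if the openssl binary is not installed.
current="$(openssl version 2>/dev/null | awk '{print $2}')"
current="${current:-0}"

# sort -V orders version strings numerically; if the required version
# sorts first (or the two are equal), the installed version is new enough.
if [ "$(printf '%s\n' "$required" "$current" | sort -V | head -n1)" = "$required" ]; then
    echo "OpenSSL $current meets the $required minimum"
else
    echo "OpenSSL $current is older than $required; please upgrade" >&2
fi
```

Note that this only checks the OpenSSL version; whether ISRG Root X1 is actually present in your trust store depends on your operating system, so also consult the Let's Encrypt compatibility page linked below.
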
432 If you access our services via a browser, please ensure that you are using a modern browser. You can find more information here: [[https:~~/~~/letsencrypt.org/docs/certificate-compatibility/>>url:https://letsencrypt.org/docs/certificate-compatibility/]]
433
434 If you are using an older device, you may no longer be able to access our services correctly. For a list of non-compatible devices, please see [[https:~~/~~/letsencrypt.org/docs/certificate-compatibility/>>url:https://letsencrypt.org/docs/certificate-compatibility/]].
435
436 === **Collaboratory 1 shutdown (2021-09-02)** ===
437
438 Collaboratory 1 is shutting down on Wednesday September 8th. As such, please ensure you have [[migrated>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-migration/Tutorial/Migrate%20your%20data/]] your data to Collaboratory 2. You can view our [[handy guide>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-migration/Tutorial/Overview/]] on what data is migrated by our tools, as well as what is not covered by the semi-automated tools.
439
440 If you follow the above links, you will also find recorded migrathons with a lot of information to help you migrate your data. If you encounter any issues or problems while migrating, please contact [[EBRAINS support>>https://ebrains.eu/support/]] and we will get back to you as soon as possible.
441
442 NOTE: You are currently viewing Collaboratory 2, which will not be shut down.
443
444 === **OpenShift issues (2021-07-09) (Resolved)** ===
445
446 The OpenShift service has gone down. As the Collaboratory Lab and the Image service both rely on it, they are currently unavailable. We are investigating this issue and will resolve it as soon as possible. We apologise for any inconvenience caused.
447
448 **UPDATE 2021-07-12: **The issue seems to be with the OpenStack servers. It is also affecting a few other VMs aside from the OpenShift service. The infrastructure team is looking into it.
449
450 **UPDATE 2021-07-12: **The OpenStack issue has been fixed. Services should be back up and running. Thank you for your patience.
451
452 === **Collaboratory Lab restart (2021-06-15)** ===
453
454 The Collaboratory Lab will be restarted to change a few background configuration settings.
455
456 === **Collaboratory User Group (UG) meeting** ===
457
458 The UG meeting is a great way to stay informed about recent and upcoming changes to the Collaboratory, as well as to make your voice and ideas heard directly by the developers. If you are interested in attending, or know someone who is, please feel free to [[join these meetings>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-user-group/Joining%20the%20group/]].
459
460 If you would like to see new features or feel certain functionality is required to improve the Collaboratory, please [[join the Collaboratory user group>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-user-group/Joining%20the%20group/]]. Once joined, please do not hesitate to [[make your suggestions>>https://drive.ebrains.eu/lib/aac38e36-4bd9-48ee-9ae6-b876a65028aa/file/Collaboratory%20User%20Group%20-%20Suggestion%20box.docx]].
461
462
463 === **Drive issue (Resolved 2021-06-14)** ===
464
465 There is currently an issue that prevents the Drive option from appearing in new collabs. Although you can still create new collabs and can access all of the other functionality provided, you will not have access to the Drive by default. You can request that a Drive be manually added to your collab via support.
466 \\We apologise for any inconvenience this causes. 
467
468
469 === **Drive issue (Solved)** ===
470
471 Users have reported issues trying to access the Drive. We are aware of this and are trying to fix it as soon as possible. The issue seems to be an OpenShift problem.
472
473 Until this is resolved, users will have issues accessing any functionality that requires Drive access. This mainly concerns the Lab and, to a lesser extent, any other service that requires access to the Drive.
474
475 **Potential fix: **We believe we have identified the issue and will be bringing the Lab service down for up to an hour at 10pm CEST on April 28th as we attempt to deploy this fix. All running notebooks will be killed during this downtime. Unfortunately, this fix has not worked and we are currently looking at other solutions.
476
477 **Potential fix 2: **We have another small release that should fix the issue, deploying today at 13:30 CEST. All currently running notebooks will be killed during this downtime.
478
479 This issue has been resolved; please let support know if you encounter any further issues with the Lab. Thank you for your understanding.
480
481 We apologise for any inconvenience this has caused.
482
483 === OnlyOffice license issue (Resolved) ===
484
485 We are aware that some users are encountering errors when trying to use OnlyOffice, similar to the following image~:
486
487 [[image:1613726704318-340.png]]
488
489 We are working on this issue and will fix it as soon as possible. Thank you for your patience, and we apologise for any inconvenience caused.
490
491
492 === SSL certificates (2020-12-04) (Resolved) ===
493
494 Our SSL Certificate provider (Let's Encrypt) has been using new certificates to sign our SSL certificates as described on their [[website>>https://letsencrypt.org/certificates]]. This requires updates on our servers and on your web browsers. The updates on your web browsers often go unnoticed by the user but in some cases your browser may inform you that it needs to fetch a new certificate. You can accept this safely; most browsers don't even mention it to the user.
495
496 Note that this is not the same as accepting an exception in your browser because of a bad certificate. Accepting such exceptions can put your data and credentials at risk of a person-in-the-middle attack.
497
498 We believe we have addressed all the issues that may arise from this change in SSL certificates. If you experience any issue, especially when accessing EBRAINS APIs, don't hesitate to contact [[support>>https://ebrains.eu/support]].
499
500
501 === Files locked in the Drive (2020-09-30) (Resolved) ===
502
503 A few users have reported issues in saving OnlyOffice edits to the Drive of a collab.
504
505 We are looking into this. For the time being, it seems that a small number of files are locked by the Drive, which prevents OnlyOffice from updating these files when they are edited online.
506
507 We are actively trying to correct the problem as promptly as possible.
508
509 ==== The temporary workaround ====
510
511 When you open a file in OnlyOffice:
512
513 1. check if other users are editing the same file. The count appears in the top right corner.
514 1. perform a trivial edit and save
515 1. if you are the only user with that file open, you can trust the presence/absence of a warning message from OnlyOffice
516 1. if multiple users have the file open simultaneously, you can:
517 11. choose to trust the first user who opened it to have checked, or
518 11. check in the Drive the timestamp of the file that you just saved
519
520 OnlyOffice will notify you when you try to save a file and the save fails. If that happens:
521
522 1. read the message
523 1. use File > Download as ... > docx or pptx or xlsx to save the file to your computer
524 1. close OnlyOffice. (% style="color:#e74c3c" %)**DO NOT KEEP LOCKED FILES OPEN IN ONLYOFFICE.**(%%) It would mislead others.
525 1. rename the file by adding (% style="color:#3498db" %)//**_UNLOCKED**//(%%) at the end of the filename, before the extension
526 1. upload the file with its new name
527 1. work on this new file
528
529 We apologize for the inconvenience and thank you for your understanding.
530
531 The Collaboratory team