=== **EBRAINS infrastructure causing issues with the Drive (2022-10-19)** ===

The Drive is currently down due to another issue at CSCS. We are working on getting this resolved as soon as possible.

=== **EBRAINS infrastructure was down this morning (2022-10-19) (Fixed)** ===

The cloud infrastructure that hosts most EBRAINS services at the [[Swiss National Supercomputing Centre (CSCS)>>https://cscs.ch]] experienced some downtime this morning from 9:25 to 10:15 CEST. This interrupted access to the storage of most of our services, making them unavailable to end users. Service is up again.

We will update this page when we have more information about the root causes.

We apologize to EBRAINS users for the downtime. Please [[contact Support>>https://ebrains.eu/support]] if you still experience issues with any service.

=== **Drive is currently down (2022-10-17) (Fixed)** ===

The Drive is currently down and not usable. This is due to infrastructure issues, and service will be restored as soon as possible.

Update 2022-10-17 21:30 CEST: CSCS and RedHat are still debugging the issue.

2022-10-18: The issue has been resolved and the Drive is now accessible again. The issue was caused by the restart of an OpenStack hypervisor with an inconsistent status of volumes in OpenStack. This locked the VM running the Collaboratory Drive in a shutoff state.

=== **Unable to save files to Drive (2022-10-04) (Fixed)** ===

The file system that the Drive runs on is currently full. Unfortunately, our detection of the file system being nearly full did not work. As a result, users cannot upload files or save any changes to their files in the Drive. Due to another, unrelated issue, it is not possible for us to simply expand the file system. For this reason, we are currently moving the Drive data to a bigger volume, which is causing the delay. We will update this page as soon as the move is finished.
\\Please note that as most services run off the Drive, this also affects the Lab: you can run and even modify notebooks, but any changes you make will not be saved.

NOTE: We transferred everything from the affected volume to a larger volume; the service is now usable again.

=== **No uploads allowed to Data-proxy (2022-08-31) (Fixed)** ===

At the moment uploads are not permitted because the data-proxy has exceeded its quota allowance. We are working to solve this as soon as possible.
\\**Fixed: **The quota was increased; files can now be uploaded again.

=== **Collaboratory Drive maintenance (2022-08-19) (Completed)** ===

The Drive was meant to be taken down for routine maintenance this afternoon to increase the space available for Drive storage. That operation has had to be rescheduled due to technical issues on the storage infrastructure.

=== **Intermittent issues with the Bucket (data-proxy) (Solved)** ===

As reported by the main banner, there had been intermittent issues with the Bucket occasionally going down for short periods of time. This has been resolved by the maintenance performed at CSCS on August 10th. If you encounter any further issues related to the Bucket, please open a ticket with support.

=== **Storage and Cloud service maintenance (2022-08-10) (Completed)** ===

This maintenance was shifted from August 3 to August 10 due to an incident on another server managed by the ETHZ central IT services.

A maintenance operation at CSCS requires that some HBP/EBRAINS services be stopped on the morning of Wednesday August 3. The services affected are those using NFS storage volumes on the Castor cloud service. EBRAINS service providers that migrate their VMs to CEPH storage ahead of that date can keep their services running during the maintenance.

**__Timeline__**: all times CEST

* **08:00**: Service providers shut down services running on OpenStack or OpenShift at CSCS
* **08:30**: Maintenance start by CSCS team
* **12:00**: Planned maintenance end by CSCS team. Service providers check that services have come back online. (% style="color:#1abc9c" %)Check this page for updates
* (% style="color:#1abc9c" %)**15:20**: Maintenance ended.

The storage back-end used by HBP/EBRAINS services has been causing some issues, with repercussions on access to the object storage and the OpenStack cloud service, and thereby on the HBP/EBRAINS services which run on this infrastructure. The issue has been identified and CSCS is ready to deploy a patch on the storage back-end. This will require that services running on OpenStack at CSCS be stopped for the duration of the maintenance.

There is never a good time for maintenance. We’re heading into a few weeks when more users will be on vacation, and some of the service providers may also be away. Hopefully this will impact as few people as possible. We apologize in advance for any inconvenience the downtime may cause.

=== **Infrastructure issues at CSCS (2022-08-01)** ===

The infrastructure at CSCS on which EBRAINS services run failed over the weekend. August 1 was a bank holiday in Switzerland, where CSCS is located. The situation was recovered before 10:00 CEST on Tuesday August 2.

The services affected were all those running on the OpenShift service at CSCS, including:

* Collaboratory Lab at CSCS (please choose the JSC site when starting the Lab),
* image service,
* atlas viewers,
* simulation services (NEST desktop, TVB, Brain Simulation Platform)

The planned maintenance listed above is expected to prevent the recurring issues that have been experienced over the past months.

We apologize for the inconvenience.

=== **Infrastructure issues (2022-07-13)** ===

Several services on our EBRAINS **OpenShift** server running at CSCS were detected as having issues.

These issues may potentially affect services running on that OpenShift server, including:

* Collaboratory Lab at CSCS (please choose the JSC site when starting the Lab),
* image service,
* atlas viewers,
* simulation services (NEST desktop, TVB, Brain Simulation Platform)

The OpenShift situation has been resolved. Some services may need to be restarted. Please contact [[Support>>https://ebrains.eu/support]] if you identify a problem.

The Collaboratory **Bucket** service (aka data proxy) has been intermittently down over the past several days due to an issue with the Swift archive/object storage at the infrastructure level at CSCS.

We are working with the CSCS team to identify the cause of the issues with the Bucket service, and we are having the infrastructure restarted any time issues occur. You can notify [[Support>>https://ebrains.eu/support]] if you identify any downtime.

=== **HBP and EBRAINS websites issue (2022-06-29)** ===

The HBP and EBRAINS main websites are down because of an issue with the .io top-level domain DNS. This issue is preventing our websites from rendering images, CSS files, and other files referenced from the HTML of the web pages. The issue is not internal to our infrastructure nor to that of our cloud provider. We have decided to bring down the two websites for the time being.

The issue was resolved in the afternoon shortly after our provider confirmed their service had returned to nominal status.

=== **Issue with the Bucket service (2022-06-27)** ===

The Bucket service (aka data proxy) is down due to an issue with the Swift archive/object storage at the infrastructure level at CSCS. The issue has been resolved. It was caused by the high-availability load balancers located in front of the Swift storage at CSCS.

We apologize for the inconvenience.

=== **Maintenance of the Drive (2022-04-14)** ===

The Drive service will be down for maintenance on Thursday April 14 from 18:00 CEST for an estimated 2 to 3 hours. We are performing a routine garbage collection on the storage. During the downtime, users will not have access to the Drive: no reading/downloading of files, no adding/modifying of files. This in turn means the Lab will not be usable either.

We apologize in advance for the interruption and thank you for your understanding.

The maintenance is done.

=== **Issue with the Drive (2022-02-05)** ===

The Drive server had an issue starting Saturday Feb 5 at 5 AM CET. The issue was caused by a disk on the server filling up with logs, after which the server had further trouble rebooting. The service was brought back online and was operational again on Saturday. No data was lost and backups were not affected.

=== **Issue with OpenShift (2022-01-17)** ===

One of the NGINX servers running in front of our OpenShift service is down due to a problem at the infrastructure level. We are in communication with the infrastructure team to get the issue resolved as quickly as possible.

The issue was resolved by the infrastructure team. The server at fault is running again. All services are up and accessible.

=== **Issue with the Collaboratory like/favourite feature (2022-01-13)** ===

The feature to like/favourite a collab was down for a day due to a change in the deployment of the Wiki service. This issue has been resolved without data loss. We apologize for any inconvenience caused by this issue.

=== **Issue on the OpenShift server at CSCS (2021-12-28)** ===

The EBRAINS OpenShift server running at CSCS was detected as having come down. This issue affects all the services running on that OpenShift server, including:

* Collaboratory Lab,
* image service,
* atlas viewers,
* simulation services (NEST desktop, TVB, Brain Simulation Platform)

We are working with the CSCS team to reactivate the service ASAP.

The service was reestablished at around 14:15 CET. The infrastructure team will continue looking into the potential causes of the problem with RedHat. The suspected cause lies in network access to the storage.

=== **Issue on the Forum service (2021-12-27)** ===

The Forum service went down over the weekend along with one of the redundant OpenShift servers. The issue was due to a problem at the infrastructure level.

The Forum was unavailable for a few hours on Monday. There was no perceptible effect on the services running on OpenShift.

=== **Issue on the OpenShift server at CSCS (2021-11-24)** ===

The EBRAINS OpenShift server running at CSCS was detected as having come down at 16:00 CET. This issue affects all the services running on that OpenShift server, including:

* Collaboratory Lab,
* image service,
* atlas viewers,
* simulation services (NEST desktop, TVB, Brain Simulation Platform)

We are working with the CSCS team and RedHat (provider of the OpenStack solution on which OpenShift runs) to reactivate the service ASAP.

The issue affected the central VM node of the OpenShift service. Restarting the VM required that we resolve multiple issues, including restarting at the same IP address and unlocking the storage volumes of that VM. (% style="color:#16a085" %)The central node has now been restarted. The same issue occurred on three other instances of OpenShift VMs; they have been restarted too. OpenShift is up and running again. The DNS redirections to the maintenance page are being removed. The services should be operational again.

(% style="color:#16a085" %)The root cause seems to be at the infrastructure level and RedHat is involved in analyzing it.

We will keep updating this page with more information as it comes in.

=== **Maintenance of multiple services (2021-11-11 to 2021-11-19)** ===

We are migrating the last of our services away from an old cloud infrastructure. The production services were taken down on Wednesday November 17 and are now all back online. We will continue working on the non-production services, on returning the services to the redundancy level they had prior to the migration, and on various final tweaks.

Please contact [[support>>https://ebrains.eu/support]] about any new issues you identify on our services.

The services which were migrated during this maintenance include:

* Collaboratory Bucket (% style="color:#2ecc71" %)**DONE**
* Knowledge Graph Search and API (% style="color:#2ecc71" %)**DONE**
* OpenShift servers, which include: (% style="color:#2ecc71" %)**DONE**
** Collaboratory Lab, (% style="color:#2ecc71" %)**DONE**
** image service, (% style="color:#2ecc71" %)**DONE**
** atlas viewers,
** simulation services (NEST desktop, TVB, Brain Simulation Platform)

* EBRAINS Docker registry, (% style="color:#2ecc71" %)**DONE**
* the Forum linked from the Collaboratory Wiki, (% style="color:#2ecc71" %)**DONE** (may have login issues)
* the Education website. (% style="color:#2ecc71" %)**DONE**

We apologize for the inconvenience and thank you for your understanding.

=== **Technical difficulties on the Drive (2021-11-03)** ===

The Drive has experienced some technical difficulties over the past weeks, caused by multiple issues.

A reboot of the Drive server due to a hardware failure revealed an issue in the communication between the Lab and the Drive. A container is started for each Lab user who opens the Lab in their browser to run their notebooks. That container remains active by default well beyond the end of the execution of the notebook and the closing of the browser tab running the Lab, unless the user explicitly stops the container. Those containers were causing unusually high traffic to the Drive. We identified the problem and found a solution in an upgrade of the tool which the Drive is based on. We have had to shift the deployment date for this fix multiple times but we will be announcing the new date ASAP. In the meantime, **we ask Lab users to kindly stop their servers** when they are not actively working in the Lab. This is done from the JupyterLab menu: //File > Hub control panel > Stop my server//.

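For users who prefer to script this, the Lab's control panel is JupyterHub, which exposes the standard JupyterHub REST API for stopping your single-user server. The following is a minimal sketch; the hub URL, token, and username shown are assumptions to be replaced with your own values (an API token can be generated from the Hub control panel):

{{code language="python"}}
import requests

# Minimal sketch: stop your single-user Lab server via the standard
# JupyterHub REST API. HUB_API, TOKEN and USER are placeholder assumptions.
HUB_API = "https://lab.ebrains.eu/hub/api"  # assumed hub URL
TOKEN = "<api-token-from-the-hub-control-panel>"
USER = "<your-username>"

resp = requests.delete(
    f"{HUB_API}/users/{USER}/server",
    headers={"Authorization": f"token {TOKEN}"},
    timeout=30,
)
# 204: server stopped; 202: stop request accepted, still proceeding
print(resp.status_code)
{{/code}}
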
The cloud infrastructure that the Drive is running on is being upgraded and the Drive service had to be moved to newer servers. An incorrect management command on the server caused the migration of the data to be launched twice, with the second transfer overwriting production data dated Nov 2 with stale data from Oct 20. Several backup issues stacked on top of this problem. We have been able to recover a backup which was made on the night of Sunday Oct 31 to Monday Nov 1. We are checking this data for issues before we reopen access to the Drive. **Some data loss is expected** for data stored on the Drive after that backup and before we put up a banner on Tuesday Nov 2 around noon asking users, as a precaution, not to add new data to the Drive. Many countries had a bank holiday on Nov 1, so for many users the data loss should be limited to a few hours on Nov 2.

We sincerely apologize for the problems incurred. We appreciate that the service we provide must be trustworthy for our users. We are taking the following actions to ensure that similar problems do not happen again:

* Review the backup procedures on the Drive server, and extend that review to all EBRAINS services which we manage.
* Request notification from the underlying infrastructure when service accounts reach the limit imposed by their storage quota.
* Improve the identification of production servers to reduce the risk of the operations team misidentifying which servers are running production services.
* Improve our procedure for communicating about such incidents.

(% style="color:#16a085" %)**The Drive service is up again and working normally. We have recovered all the data that we could. We will be performing a Drive upgrade ASAP; check the Wiki banner for more information.**

Additionally, the Collaboratory team will be looking into mirroring services onto more than one Fenix site to reduce downtime when incidents occur. This task will be more challenging for some services than for others; it might not be materially feasible for some services, but we will be analysing possibilities for each Collaboratory service.

=== **Maintenance on 2021-10-19** ===

Due to migration to new hardware, various EBRAINS services will be down for up to 1 hour while these services and their data are migrated. Please see our [[Releases>>doc:Collabs.the-collaboratory.Releases.WebHome]] page for more information on this release.

=== **Infrastructure failure on 2021-10-04** ===

A failure of the OpenStack infrastructure at the Fenix site which hosts most of our services brought down many EBRAINS services this morning before 9 AM CEST. A second event later in the morning at the same Fenix site brought down most other EBRAINS services. The infrastructure was brought back online in the afternoon and we have been reactivating services as quickly as possible. Some issues are still being addressed at the infrastructure level [17:00 CEST].

We apologize for the inconvenience and thank all users for their understanding.

=== **SSL Certificates expiring September 30th** ===

Our SSL certificates are produced by Let's Encrypt. One of their root certificates is expiring on September 30. This may cause issues for access to API services, especially from older browsers. You can find more information about these changes at the following links:

* [[https:~~/~~/letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/>>url:https://letsencrypt.org/docs/dst-root-ca-x3-expiration-september-2021/]]
* [[https:~~/~~/letsencrypt.org/certificates/>>url:https://letsencrypt.org/certificates/]]

As mentioned in the above links, if you are using one of our APIs, you should take the following steps to prevent issues (see the sketch below):

* You must trust ISRG Root X1 (not just DST Root CA X3)
* If you use OpenSSL, you must have version 1.1.0 or later

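Both checks can be scripted. The following is a minimal sketch in Python using only the standard library, assuming Let's Encrypt's own test host valid-isrgrootx1.letsencrypt.org, which presents a chain anchored at ISRG Root X1:

{{code language="python"}}
import socket
import ssl

# 1. The OpenSSL linked into Python must be 1.1.0 or later.
print(ssl.OPENSSL_VERSION)
assert ssl.OPENSSL_VERSION_INFO >= (1, 1, 0), "OpenSSL too old for ISRG Root X1"

# 2. The local trust store must accept a chain anchored at ISRG Root X1.
host = "valid-isrgrootx1.letsencrypt.org"  # Let's Encrypt's test site
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        # Raises ssl.SSLCertVerificationError if ISRG Root X1 is not trusted.
        print("Handshake OK:", tls.version())
{{/code}}
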
If you access our services via a browser, please ensure that you are using a modern browser. You can find more information here: [[https:~~/~~/letsencrypt.org/docs/certificate-compatibility/>>url:https://letsencrypt.org/docs/certificate-compatibility/]]

If you are using an older device, you may no longer be able to access our services correctly. For a list of non-compatible devices, please see [[https:~~/~~/letsencrypt.org/docs/certificate-compatibility/>>url:https://letsencrypt.org/docs/certificate-compatibility/]].

=== **Collaboratory 1 shutdown (2021-09-02)** ===

Collaboratory 1 is shutting down Wednesday September 8th. As such, please ensure you have [[migrated>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-migration/Tutorial/Migrate%20your%20data/]] your data to Collaboratory 2. You can view our [[handy guide>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-migration/Tutorial/Overview/]] on what data is migrated by our tools, as well as what is not covered by the semi-automated tools.

If you follow the above links, you will also find recorded migrathons, where you can find a lot of information to help you migrate your data. If you encounter any issues or problems while migrating your data, please contact [[EBRAINS support>>https://ebrains.eu/support/]] and we will get back to you as soon as possible.

NOTE: You are currently viewing Collaboratory 2, which will not be shut down.

=== **OpenShift issues (2021-07-09) (Resolved)** ===

The OpenShift service has gone down. As the Collaboratory Lab and the Image service both rely on this service, they are currently unavailable. We are investigating this issue and it will be resolved as soon as possible. We apologize for any inconvenience caused.

**UPDATE 2021-07-12: **The issue seems to be with the OpenStack servers. It is also affecting a few more VMs aside from the OpenShift service. The infrastructure team is looking into it.

**UPDATE 2021-07-12: **The OpenStack issue has been fixed. Services should be back up and running. Thank you for your patience.

=== **Collaboratory Lab restart (2021-06-15)** ===

The Collaboratory Lab will be restarted to change a few background configuration settings.

=== **Collaboratory User Group (UG) meeting** ===

The UG meeting is a great way to stay informed about recent and upcoming changes to the Collaboratory, and to make your voice and ideas heard directly by the developers. If you are interested in attending this meeting, or know someone who is, please feel free [[to join these meetings>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-user-group/Joining%20the%20group/]].

If you would like to see new features or feel certain functionality is required to improve the Collaboratory, please [[join the Collaboratory user group>>https://wiki.ebrains.eu/bin/view/Collabs/collaboratory-user-group/Joining%20the%20group/]]. Once joined, please do not hesitate to [[make your suggestions>>https://drive.ebrains.eu/lib/aac38e36-4bd9-48ee-9ae6-b876a65028aa/file/Collaboratory%20User%20Group%20-%20Suggestion%20box.docx]].

=== **Drive issue (Resolved 2021-06-14)** ===

There is currently an issue that prevents the Drive option from appearing in new collabs. Although you can still create new collabs and can access all of the other functionality provided, you will not have access to the Drive by default. You can request that a Drive be manually added to your collab via support.
\\We apologize for any inconvenience this causes.

=== **Drive issue (Solved)** ===

Users have reported issues trying to access the Drive. This is something we are aware of and trying to fix as soon as possible. The issue seems to be an OpenShift problem.

Until this is resolved, users will have issues accessing any functionality that requires Drive access. This mainly concerns the Lab and, to a lesser extent, any service that requires access to the Drive.

**Potential fix: **We believe we have identified the issue and will be bringing the Lab service down for up to an hour at 10 PM CEST on April 28th as we attempt to deploy this fix. All current Notebooks will be killed during this downtime. Unfortunately, this fix has not worked and we are looking at other solutions at the moment.

**Potential fix 2: **We have another small release that should fix the issue at 13:30 CEST today. All currently running notebooks will be killed during this downtime.

This issue has been resolved; please let support know if you encounter any issues with the Lab. Thank you for your understanding.

We apologize for any inconvenience this has caused.

=== OnlyOffice license issue (Resolved) ===

We are aware that some users are encountering issues when trying to use OnlyOffice, similar to the following image~:

[[image:1613726704318-340.png]]

We are working on this issue and will have it fixed as soon as possible. Thank you for your patience, and we apologize for any inconvenience caused.

=== SSL certificates (2020-12-04) (Resolved) ===

Our SSL certificate provider (Let's Encrypt) has been using new certificates to sign our SSL certificates, as described on their [[website>>https://letsencrypt.org/certificates]]. This requires updates on our servers and in your web browsers. The updates in your web browsers often go unnoticed, but in some cases your browser may inform you that it needs to fetch a new certificate. You can accept this safely; most browsers don't even mention it to the user.

Note that this is not the same as accepting an exception in your browser because of a bad certificate. Accepting such exceptions can put your data and credentials at risk of a person-in-the-middle attack.

We believe we have addressed all the issues that may arise from this change in SSL certificates. If you experience any issue, especially when accessing EBRAINS APIs, don't hesitate to contact [[support>>https://ebrains.eu/support]].

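If you want to check which chain one of our servers presents to your client, the following is a minimal sketch using only the Python standard library; wiki.ebrains.eu is used here as an example host:

{{code language="python"}}
import socket
import ssl

# Print the issuer of the certificate a server presents, to confirm it
# chains to the new Let's Encrypt intermediates described above.
host = "wiki.ebrains.eu"  # example host
ctx = ssl.create_default_context()
with socket.create_connection((host, 443)) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        issuer = dict(rdn[0] for rdn in tls.getpeercert()["issuer"])
        print(issuer)  # e.g. {'organizationName': "Let's Encrypt", ...}
{{/code}}
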
=== Files locked in the Drive (2020-09-30) (Resolved) ===

A few users have reported issues in saving OnlyOffice edits to the Drive of a collab.

We are looking into this. For the time being, it seems that a small number of files are locked by the Drive, which prevents OnlyOffice from updating these files when they are edited online.

We are actively trying to correct the problem as promptly as possible.

==== The temporary workaround ====

When you open a file in OnlyOffice:

1. check if other users are editing the same file. The count appears at the top right corner.
1. perform a trivial edit and save
1. if you are the only user with that file open, you can trust the presence/absence of a warning message from OnlyOffice
1. if multiple users have the file open simultaneously, you can:
11. choose to trust the first user who opened it to have checked, or
11. check in the Drive the timestamp of the file that you just saved

OnlyOffice will notify you when you try to save a file and the save fails. If that happens:

1. read the message
1. use File > Download as ... > docx or pptx or xlsx to save the file to your computer
1. close OnlyOffice. (% style="color:#e74c3c" %)**DO NOT KEEP LOCKED FILES OPEN IN ONLYOFFICE.**(%%) It would mislead others.
1. rename the file by adding (% style="color:#3498db" %)//**_UNLOCKED**//(%%) at the end of the filename, before the extension (see the sketch after this list)
1. upload the file with its new name
1. work on this new file
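
If you handle the downloaded file from a script, the rename in step 4 can be written as below; "report.docx" is just a hypothetical example of a downloaded file:

{{code language="python"}}
from pathlib import Path

# Insert "_UNLOCKED" before the extension, as described in step 4 above.
original = Path("report.docx")  # hypothetical downloaded file
unlocked = original.with_name(original.stem + "_UNLOCKED" + original.suffix)
original.rename(unlocked)  # report.docx -> report_UNLOCKED.docx
print(unlocked)
{{/code}}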

We apologize for the inconvenience and thank you for your understanding.

The Collaboratory team