FIXME Needs more content!

Oh, interesting what you're doing here, but…

…why are there delays and spelling errors?

Our subtitles are typed live by teams of three to four people working concurrently. What is heard has to be typed and sent to the subtitle system, and on top of that the team has to coordinate who types what and who sends which line. All of this requires high concentration.

This process naturally adds some latency that cannot be avoided, neither by humans nor technically. Missing or duplicated lines and typing errors can easily occur in this process and cannot be fully prevented; there is no time for corrections at this speed. But these fairly obvious errors are not very relevant, as they do not prevent people from understanding the content of the talk.

100% perfection is not yet possible in a live environment. Even professional machine stenographers in the US make some mistakes live.

…why is it not enough to show subtitles online live for tablets or laptops?

Our understanding of accessibility is that participation should be possible without any “precautionary measures”. Every congress visitor should be able to go to a talk and participate just like anyone else. Even though the NOC does fantastic work to provide great wifi connectivity, there are physical limits and it is not reliable enough for this task. Accessibility should not depend on working wifi, on having a valid data plan on your mobile device, on having enough charge in your battery – or on having an appropriate digital device with you at all.

…why do you do subtitles instead of interpreting into sign language?

Because interpreting into sign language requires people capable of doing it. Too few of them attend congress and can deal with the vocabulary used there. There are far more fast typists at congress.

(German) sign language can only be understood by a fairly small group of people. More people benefit from subtitles, such as the hearing-impaired, non-native speakers and deaf non-German congress visitors. Sign language is not international: every country has its own, and there are regional idioms. Interpreting an English talk into German sign language is a difficult challenge.

…why don't you use automatic speech recognition?

Because it doesn't work. Not yet. Yes, we tried. Yes, that will change, but not tomorrow and not the day after tomorrow. As soon as it does, we will be quite happy not to have to type by hand anymore. But we aren't there yet. If you want to work on that, go ahead, but remember that – with all the success of neural networks, Siri, Google Now and Cortana – it is not an easy problem. Something that seems trivial to our ears, such as recognising laughter or applause, is still practically impossible for current machine solutions.

Congress is an unsuitable place to test such systems.

…why don't you use speech recognition instead of typing by repeating what was said by the speaker?

We tried. It works partly, and works best in situations where we are really good at typing as well. But there are problems (misinterpretation, vocabulary) and few people trained to do it. We would depend on those people and on resources such as interpreter rooms far more than with our current method.

It is also just as demanding as typing. Why is it done in professional environments? Because they don't have the option of typing in larger teams, as we do. But if you want to experiment in this field: go ahead! Just don't expect any miracles, or that we will simply switch to doing it this way.

…why don't you leave the task to professional captioners?

Just as with sign language interpreters, you would need people able to do this. We had somebody trained as a captioner on our team at 31c3 and 32c3. But their professional workflow differs from ours: they cannot produce 1:1 subtitles at such speed, they have to shorten and create shortcuts specific to a talk that must be prepared in advance, and not everybody is comfortable with English talks. We don't want to shorten if we can avoid it.

We are happy about professional captioners and their experience in our team, but still think that for our very technical vocabulary our team method is preferable.

…how about stenography?

The situation is a bit different with machine stenography, especially as it is done in the US. It uses a different technology which might be quite applicable to our use case. If you can do machine stenography, please contact us. We have a steno machine (Stentura 400 SRT) and it could be put to use.

We think we get good results for the resources we have (well-qualified angels!). Angels visiting the congress are familiar with the vocabulary used – quite an advantage! We think our results are just as good as – or maybe even better than – those that professional captioners without this specialisation could achieve in our environment, and much cheaper this way.

Please trust that we have put quite some thought into why we do it the way we do it. If you have any questions, just ask us on IRC.

Oh, and by the way: timing is not the time-consuming task, transcription is.

I want to use subtitles

FIXME Needs more content!

I am a Subtitles Angel

I signed up on the Engelsystem, but have not been unlocked yet

You will be unlocked after taking part in a subtitles angel introduction. You will also be made familiar with the live subtitles interface there.

When do the introductions take place?

Regularly. At least daily, and more often if there is need and there are resources. On day 1, the first introduction will be at 10am in Room C1. For more information, see the congress wiki.

Anything else to bring?

Bring your own device! Unfortunately we cannot provide laptops, so please bring your own computer. Bringing a network cable is also a good idea. Apart from that: preferably bring a DECT phone, or at least some other Eventphone connection.

Any system requirements?

A recent web browser. Preferably Chromium/Chrome, but Firefox should also work well. IE is yuck. Your computer should have a working Ethernet connection (WiFi is not reliable enough for us).

I'm not on location, can I still write live subtitles?

Unfortunately not, the stream delay is too high for this purpose. Be aware that our subtitles interface requires teamwork and coordination, which would not be possible off-site.

I already wrote live subtitles and have suggestions to improve the Live subtitles interface

Great! Talk to us on IRC, then go to L2S2's GitHub repo and create an issue there!

I want to create subtitles for the recordings

Great! Please read the manual. Carefully.
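Subtitles for recordings are typically stored in a timed text format such as SubRip (.srt), where each cue carries an index, a start/end timestamp and the text shown on screen. As a rough illustration only – the cue text and timings below are invented, and this is not the project's actual tooling – a minimal parser for such a file might look like this:

```python
import re

# Minimal SubRip (.srt) parser: splits a file into cues of
# (index, start, end, text). The timings and lines below are
# made up purely for illustration.
CUE_RE = re.compile(
    r"(\d+)\s*\n"
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
    r"(.*?)(?:\n\n|\Z)",
    re.S,
)

def parse_srt(data: str):
    """Return a list of (index, start, end, text) tuples."""
    return [
        (int(i), start, end, text.strip())
        for i, start, end, text in CUE_RE.findall(data)
    ]

example = """\
1
00:00:01,000 --> 00:00:03,500
Welcome to the talk.

2
00:00:03,600 --> 00:00:06,000
Subtitles make it accessible.
"""

for cue in parse_srt(example):
    print(cue)
```

This also shows why timing is the smaller part of the work: the timestamps are mechanical bookkeeping, while the cue text is the transcription that has to be produced by hand.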

Why don't we usually use the live subtitles for creating the subtitles for the recordings?

First, because we don't have access to them right after congress. They are kept in a database on hardware that is not accessible to us during dismantling and in the days/weeks after that.

But also simply because they are not useful for this. Just compare our live subtitles with the work of a live TV interpreter, versus the accurate, well-prepared subtitles of a movie. You would not use the stammering – sorry – of the former to create an accurate transcript of what was said if you have the time to do it properly.

It is usually less work to simply retype everything word by word (but continuously) than to correct, jump around, copy here, delete there, fix the capitalisation, and so on. Believe us, we tried quite a few times. There MIGHT be exceptions to this rule, but then we can also handle them as exceptions.

FIXME Needs more content!

en/faq.txt · Last modified: 2020/09/19 22:03 by
