===== Eight topics were suggested and discussed in the workshop =====
  
  * Cognitive offload
  * Hybrid cognitive systems
  * Shared workload
  * Individualised AI
  * Trust and transparency
  * Silent failures
  * Emergence
  * Implementation methods for industry
  
The following paragraphs try to summarise the most important aspects that can be distilled from the notes. Passages, phrases, concepts, or simply words that have been filled in through educated guessing by the editor (Elin) are marked in brackets [...]. Terms taken directly from the notes are set in //italics//.
  
<color #ff7f27>4 markers</color>

<color #ed1c24>5 markers</color>
  
=== Cognitive offload ===
  
{{:individualized_ai.jpeg?nolink&400|}}

=== Trust and transparency ===

Two aspects that would link to the following topic (how to deal with silent failures) are that [the AI] //<color #7092be>must understand why it made a mistake</color>// and must //<color #7092be>explain when disagreeing</color>//. For trust to be established, [the system / human?] //<color #7092be>must be predictable</color>//, which can include //simulation// as a tool. All of this is, however, not //<color #00a2e8>necessarily inherent to AI based systems, it could also be a "normal" algorithm</color>// that is discussed. Further, the issue of //<color #7092be>liability</color>// is mentioned. Suggested research topics are //<color #22b14c>visualising AI decision making (also for different types of users)</color>// and //digital twin//.

See also the original chart from the workshop:

{{:trust_and_transparency.jpeg?nolink&400|}}

=== Silent failures ===

//<color #ed1c24>"How to deal with silent failures?"</color>// was the overall question, which was seen as highly relevant. A silent failure was described as a situation where a system (the AI) functions according to specification but misunderstands the human due to ambiguities in what is said, done, or observed. The system response is then clear and "correct", while the system behaviour seems "off" in the eyes of the observer (user / collaborator). Concrete suggestions were given for several possible ways of avoiding such situations. Using //probability measures or confidence levels// was one suggestion, entailing the follow-up question of //how to determine these reliably and transparently//. Systems that would be able to //<color #7092be>confirm from time to time</color> [without being obnoxious]// or give //<color #7092be>better feedback at instruction</color>// were mentioned, as well as //<color #00a2e8>simulation or roll-outs</color>//.

See also the original chart from the workshop:

{{:silent_failures.jpeg?nolink&400|}}
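
To make the //confidence levels// idea concrete, here is a minimal sketch (not from the workshop notes; the threshold value, the ``interpret`` stub, and all names are hypothetical) of a system that asks for confirmation instead of failing silently when its interpretation of an instruction is uncertain:

```python
# Hypothetical sketch: a system that asks for confirmation when its
# interpretation of an instruction falls below a confidence threshold,
# rather than acting on a guess and failing silently.

CONFIRM_THRESHOLD = 0.8  # hypothetical cut-off; would need tuning in practice

def interpret(instruction: str) -> tuple[str, float]:
    """Stand-in for a real interpreter: returns (action, confidence)."""
    # In this toy example, ambiguous wording lowers the confidence.
    if "that one" in instruction or "it" in instruction.split():
        return ("pick_up_nearest_object", 0.55)
    return ("pick_up_red_block", 0.95)

def respond(instruction: str) -> str:
    action, confidence = interpret(instruction)
    if confidence < CONFIRM_THRESHOLD:
        # "Confirm from time to time" instead of acting on a low-confidence guess.
        return f"Did you mean '{action}'? (confidence {confidence:.2f})"
    return f"Executing '{action}'."

print(respond("pick up the red block"))  # unambiguous: acts directly
print(respond("hand me that one"))       # ambiguous: asks for confirmation
```

The hard-coded confidence values gloss over exactly the open question from the notes, namely //how to determine these reliably and transparently//.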

=== Emergence ===

Topics under this umbrella term were //<color #7092be>definitions [in terms of whether it is possible to find any], e.g. flocking, crowds, etc.</color>// Also relevant was the question of whether it is possible to //predict and evaluate emergent behaviour (and whether this should be seen as benefit or risk)//. Further sub-topics discussed were //how to leverage emergence// and [how to work with it / research it in] //theory and practice//.

See also the original chart from the workshop:

{{:emergence.jpeg?nolink&400|}}
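
As an illustrative toy only (nothing here is from the notes), emergence in the //flocking// sense can be hinted at with a one-rule simulation: each agent follows a purely local rule, yet the group as a whole converges to a cluster, a behaviour that no individual rule states explicitly:

```python
# Toy sketch of emergent behaviour (hypothetical, for illustration only):
# every "bird" applies one local rule -- drift 10% toward the mean position
# of the flock -- and tight clustering emerges at the group level.

def step(positions):
    """Apply the local rule once to every agent."""
    centre = sum(positions) / len(positions)
    return [p + 0.1 * (centre - p) for p in positions]

flock = [0.0, 2.0, 10.0]   # arbitrary starting positions
for _ in range(50):        # let the local rule run for a while
    flock = step(flock)

# The spread of the flock shrinks by a factor of 0.9 per step, so the
# group ends up tightly clustered around its (unchanged) mean of 4.0.
spread = max(flock) - min(flock)
print(f"spread after 50 steps: {spread:.4f}")
```

Whether such convergent group behaviour counts as benefit or risk is exactly the evaluation question raised in the paragraph above.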

=== Implementation methods for industry ===

Central topics/issues in this area are still to //<color #22b14c>establish use cases and scenarios that are applicable and relevant for the industry</color>//, to //<color #ff7f27>close the gap between academia and industry</color>//, and to //<color #7092be>demonstrate concrete measurable added value [of HAIT] to the industry</color>//. This would include talking about actual //<color #ff7f27>tools instead of simple models</color>//, and even considering a //"human readiness level"// [rather than TRL].

See also the original chart from the workshop:

{{:implementation_methods_for_industry.jpeg?nolink&400|}}
Last modified: 2022-11-04 08:50 by elin