Artificial Intelligence for Teachers

  • 1.  NUMBERS DON'T LIE BUT DOES AI?

    Posted 03-31-2024 12:57 AM

    While numbers are often seen as objective and factual, the data used to train artificial intelligence (AI) models can introduce biases that may lead to discriminatory outcomes, such as racial profiling (Nicoletti, 2023). AI algorithms learn from the data they are fed, and if that data contains biases, the AI system can inadvertently perpetuate and amplify those biases in its decision-making processes.

    To address this issue, it is essential to implement measures such as diverse and representative data collection, rigorous testing for bias, and continuous monitoring of AI systems in real-world applications. Additionally, incorporating ethical guidelines and diverse perspectives into the development and deployment of AI can help mitigate bias and promote fairness.

    Increasing transparency in AI algorithms and decision-making processes is crucial to understanding and addressing bias. Collaboration among technologists, ethicists, policymakers, and affected communities is key to developing more equitable and inclusive AI systems. By actively evaluating and improving AI technologies to align with ethical standards and uphold human dignity, we can strive towards a future where AI does not perpetuate harmful biases and instead contributes to a more just society.

     

    Reference

    Nicoletti, L. (2023, June 13). AI has some serious racial profiling issues. Bloomberg. https://www.bloomberg.com/graphics/2023-generative-ai-bias/



    ------------------------------
    Brittney McCarey
    Grand Canyon University (Active Student)
    Aspiring Educator OEA/NEA
    ------------------------------


  • 2.  RE: NUMBERS DON'T LIE BUT DOES AI?

    Posted 6 days ago

    ChatGPT also produces what are known as hallucinations, where information is wholly fabricated. The internet contains a great deal of false and inaccurate information, and an LLM cannot distinguish lies from truth; it simply repeats what it has seen and predicts the next set of words. Chatbots have produced fake court case citations, fake sources for material, fake facts about the James Webb Space Telescope, and many other fabricated claims. There are some tips to reduce hallucinations, but as always, fact-check any result from ChatGPT or any other AI-driven chatbot:

    • Keep your request clear and precise. 
    • Avoid merging unrelated concepts.
    • Avoid impossible scenarios.
    • Avoid contradicting facts and misusing scientific terms.
    • Avoid assigning uncharacteristic properties and blending different realities.

    Reducing hallucinations in AI-assisted searches will give better results. Even so, every result should still be fact-checked.
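    The tips above could be sketched as a simple pre-flight check that flags prompt traits which tend to invite hallucinated answers. This is only a toy illustration: the function name `preflight_check` and its keyword rules are invented for this example and stand in for judgment a person would normally apply, not any real library's API.

    ```python
    def preflight_check(prompt: str) -> list[str]:
        """Return warnings for prompt traits that tend to invite hallucinations.

        The rules below are illustrative stand-ins for the tips in this post,
        not a rigorous detector.
        """
        warnings = []
        # Tip: keep the request clear and precise -- very short prompts are vague.
        if len(prompt.split()) < 5:
            warnings.append("vague request: add specifics such as names, dates, or sources")
        # Tip: avoid merging unrelated concepts in a single request.
        if " and also " in prompt.lower():
            warnings.append("possibly merged concepts: split into separate prompts")
        # Tips on impossible scenarios, contradictions, and blended realities
        # would need real language analysis; a keyword check stands in here.
        if "time travel" in prompt.lower():
            warnings.append("impossible scenario: restate the request factually")
        return warnings

    # A precise, single-topic prompt passes the checklist.
    print(preflight_check("Summarize the 2023 Cureus paper on ChatGPT hallucinations"))  # []
    ```

    Even a prompt that passes such a check can still yield hallucinations, so the fact-check step below remains essential.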

    For more information:

    Alkaissi, H., & McFarlane, S. (2023, February 19). Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus, 15(2). https://doi.org/10.7759/cureus.35179

    Gewirtz, D. (2023, October 9). 8 ways to reduce ChatGPT hallucinations. ZDNET. https://www.zdnet.com/article/8-ways-to-reduce-chatgpt-hallucinations/

    Metz, C. (2023, November 6). Chatbots may "hallucinate" more often than many realize. The New York Times. https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html



    ------------------------------
    Morgan Harrell, M.Ed.
    ------------------------------