Meta & Fysikken: Episode 76: AI - The big episode on artificial intelligence

In this episode we take a so-called "deep dive" into AI - artificial intelligence! It is on everyone's lips right now, and we look at the many exciting possibilities it offers, but also at the problems it could potentially cause, and not least the ethical dilemmas and challenges that are surely coming.

And Anders asks the question: Is there perhaps a human "X factor" that means we will always have a human advantage over the machines?

——

Karina's notes:

AI:

Not everything we call AI actually is AI.

To qualify as AI, a system must exhibit some level of learning and adapting. For this reason, decision-making systems, automation, and statistics are not AI.

AI is broadly defined in two categories: artificial narrow intelligence (ANI) and artificial general intelligence (AGI). To date, AGI does not exist.

Most of what we know as AI today has narrow intelligence – where a particular system addresses a particular problem. Unlike human intelligence, such narrow AI intelligence is effective only in the area in which it has been trained: fraud detection, facial recognition, or social recommendations, for example.

Neural networks are inspired by the way human brains work. Unlike most machine learning models that run calculations on the training data, neural networks work by feeding each data point one by one through an interconnected network, each time adjusting the parameters.

As more and more data are fed through the network, the parameters stabilize; the final outcome is the "trained" neural network, which can then produce the desired output on new data – for example, recognizing whether an image contains a cat or a dog.
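The training process described above can be sketched in a few lines. This is a minimal, illustrative toy (a single logistic neuron on made-up data, not any real AI system): each data point is fed through in turn, and the parameters are nudged a little after every point until they stabilize.

```python
import math
import random

# Toy sketch of per-example training: a single logistic neuron.
# Real neural networks have many layers and use backpropagation,
# but the one-point-at-a-time parameter-adjustment loop is the same idea.

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(2)]
bias = 0.0
lr = 0.5  # learning rate: how far to nudge the parameters each step

def predict(x):
    z = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid squashes z into (0, 1)

# Made-up data: label is 1 when x0 + x1 > 1 (a linearly separable rule)
data = [([0.0, 0.0], 0), ([1.0, 0.0], 0), ([0.0, 1.0], 0),
        ([1.0, 1.0], 1), ([0.9, 0.9], 1), ([0.2, 0.1], 0)]

for epoch in range(200):       # repeated passes over the data
    for x, y in data:          # feed each data point one by one
        p = predict(x)
        error = p - y          # how wrong the current parameters are
        for i in range(2):     # adjust the parameters a little
            weights[i] -= lr * error * x[i]
        bias -= lr * error

# After enough passes the parameters stabilize: the "trained" network
# now produces the desired output on new inputs.
print(round(predict([1.0, 1.0])), round(predict([0.1, 0.1])))
```

The same loop, scaled up to billions of parameters and billions of examples, is what the large cloud-computing infrastructures mentioned below make feasible.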

The significant leap forward in AI today is driven by technological improvements in the way we can train large neural networks, readjusting vast numbers of parameters in each run thanks to the capabilities of large cloud-computing infrastructures. For example, GPT-3 (the AI system that powers ChatGPT) is a large neural network with 175 billion parameters.

AI needs three things to be successful:

Data, computation, and algorithms form the foundation of the future of AI. All indicators are that rapid progress will be made in all three categories in the foreseeable future.

Data:

First, it needs high-quality, unbiased data, and lots of it. Researchers building neural networks use the large data sets that have come about as society has digitized.

Copilot, for augmenting human programmers, draws its data from billions of lines of code shared on GitHub. ChatGPT and other large language models use the billions of websites and text documents stored online.

Text-to-image tools, such as Stable Diffusion, DALL-E 2, and Midjourney, use image-text pairs from data sets such as LAION-5B. AI models will continue to evolve in sophistication and impact as we digitize more of our lives and provide them with alternative data sources, such as simulated data or data from game settings like Minecraft.

Hardware:

AI also needs computational infrastructure for effective training. As computers become more powerful, models that now require intensive efforts and large-scale computing may, in the near future, be handled locally. Stable Diffusion, for example, can already be run on local computers rather than cloud environments.

Algorithms (rules):

The third need for AI is improved models and algorithms. Data-driven systems continue to make rapid progress in domain after domain once thought to be the territory of human cognition.

However, as the world around us constantly changes, AI systems need to be constantly retrained using new data. Without this crucial step, AI systems will produce answers that are factually incorrect or do not take into account new information that's emerged since they were trained.
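The effect of skipping retraining can be shown with a deliberately simple toy (a hypothetical one-number threshold "model", not any real system): a model fit on old data keeps giving wrong answers once the world shifts, while refitting on fresh data repairs it.

```python
# Toy illustration of model staleness: a threshold classifier on one feature.
# "Training" just places the decision threshold between the two classes.

def fit_threshold(samples):
    lo = [x for x, y in samples if y == 0]
    hi = [x for x, y in samples if y == 1]
    return (max(lo) + min(hi)) / 2  # boundary midway between the classes

def accuracy(threshold, samples):
    return sum((x > threshold) == bool(y) for x, y in samples) / len(samples)

# Old world: class boundary sits around 0.45.
old_data = [(x / 10, 0) for x in range(5)] + [(x / 10, 1) for x in range(5, 10)]
model = fit_threshold(old_data)

# The world changes: the boundary drifts up to around 0.95.
new_data = [(x / 10, 0) for x in range(10)] + [(x / 10, 1) for x in range(10, 15)]

stale_acc = accuracy(model, new_data)                     # stale model misfires
fresh_acc = accuracy(fit_threshold(new_data), new_data)   # retrained model
print(round(stale_acc, 2), fresh_acc)
```

The stale model answers confidently but incorrectly on the shifted data; only retraining on the new data restores its accuracy, which is the crucial step the paragraph above describes.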

https://www.sciencealert.com/not-everything-we-call-an-ai-is-actually-artificial-intelligence-heres-what-to-know