2024-Q4-AI-Business 9. Exam Questions

Circle one correct answer!

  1. Which statement is correct?

    1. Artificial intelligence today is a complex computer program that mainly consists of programming rules

    2. Artificial intelligence today is a mathematical model that mainly consists of mathematical equations

    3. Artificial intelligence today is a complex computer program that mainly consists of expert knowledge

  2. What does artificial intelligence learn from?

    1. Expert-generated rules

    2. Data

    3. Programmer-created rules

  3. Which of the given examples could be input data in an artificial intelligence model?

    1. The probability that a client will refuse a service

    2. How many times a client has logged into the system in the last 10 days

    3. The values of the model's weights

  4. Which of the given examples could be output data in an artificial intelligence model?

    1. The probability that a client will refuse a service

    2. How many times a client has logged into the system in the last 10 days

    3. The values of the model's weights

  5. What type of model is required to predict the price of a product?

    1. Regression

    2. Classification

    3. Enumeration

  6. What type of model is required to predict whether a client will refuse a service?

    1. Regression

    2. Classification

    3. Enumeration

  7. In which environment are artificial intelligence models usually trained?

    1. MATLAB

    2. Python

    3. Power BI

  8. What datasets are needed to train a model that could be used in production?

    1. Training set

    2. Test set

    3. Validation set

    4. Training, test, and validation sets

  9. Which factor most affects the model's accuracy?

    1. Learning rate

    2. An unbalanced number of samples per class in the training dataset

    3. Diversity of samples in the dataset

  10. For which application would artificial intelligence not be effective?

    1. Writing text advertisements

    2. Checking passwords and usernames during website authentication

    3. Creating coloring books for children

    4. Composing music

  11. For what purposes could clustering and categorization using the k-Means algorithm be useful?

    1. Classifying animal images

    2. Recommending new products to customers using their purchase history

    3. Generating text for advertisements

    4. Predicting the price of a new product that differs significantly from all existing products

  12. For what purposes could decision tree algorithms like ID3 be used?

    1. To determine the reasons why a client bought an existing product

    2. To determine whether a client will buy a new product

    3. To determine the probability that it will rain outside

    4. To determine the numerical value of a product's price

  13. How similar is an artificial deep neural network model to the natural neural network of the human brain?

    1. Almost identical, as evidenced by large language models, image models, and other models

    2. Very similar, because it models biochemical processes in time

    3. Not similar, because the artificial neural network model is mathematical and operates very differently from the human natural neural network

  14. Which sequence of actions corresponds to the training of deep neural network models?

    1. Data normalization, Dataset splitting, Model creation, Loss function selection, Additional metric selection, Test loop, Validation loop, Epochs, Training loop, Backpropagation algorithm (SGD)

    2. Data normalization, Dataset splitting, Model creation, Epochs, Training loop, Backpropagation algorithm (SGD), Loss function selection, Additional metric selection, Test loop, Validation loop

    3. Data normalization, Dataset splitting, Model creation, Loss function selection, Additional metric selection, Epochs, Training loop, Backpropagation algorithm (SGD), Test loop, Validation loop
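For reference, the sequence in option 3 can be sketched as a toy pure-Python training loop. The data, model, and hyperparameters below are hypothetical (a one-feature linear model on synthetic data), chosen only to make each step of the sequence visible:

```python
import random

# Toy data: y = 2x + 1 for x = 0..19 (hypothetical)
data = [(x, 2.0 * x + 1.0) for x in range(20)]

# 1. Data normalization: scale inputs into 0..1
x_max = max(x for x, _ in data)
data = [(x / x_max, y) for x, y in data]

# 2. Dataset splitting: train / test / validation
random.seed(0)
random.shuffle(data)
train, test, val = data[:14], data[14:17], data[17:]

# 3. Model creation: two trainable weights
w, b = 0.0, 0.0

# 4. Loss function selection: mean squared error
def mse(pairs, w, b):
    return sum((w * x + b - y) ** 2 for x, y in pairs) / len(pairs)

# 5.-6. Epochs containing the training loop with SGD weight updates
lr = 0.1
for epoch in range(200):
    for x, y in train:
        err = (w * x + b) - y      # forward pass
        w -= lr * 2 * err * x      # gradient step on w
        b -= lr * 2 * err          # gradient step on b

# 7. Test and validation loops: evaluate on held-out data
print(round(mse(test, w, b), 4), round(mse(val, w, b), 4))
```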

  15. What does an Epoch mean in the training process of artificial neural networks?

    1. All samples in the training set are reviewed, and there can be many epochs in one training process

    2. A data normalization method that removes extreme values

    3. All samples in the training set are reviewed, and there can only be one epoch in the training process

    4. Validation samples are reviewed after training

  16. Which component is the most important in ChatGPT prompt engineering to achieve a quality answer?

    1. Formulating the prompt as short and precise as possible

    2. Formulating the prompt as long and vague as possible

    3. Copying facts into the prompt

  17. What will happen if you continue asking several questions on different topics consecutively in the same ChatGPT session?

    1. The language model will start copying content from previous questions into later answers

    2. It will not affect the language model's performance

    3. The language model will become confused and not know what to answer

  18. What is the fastest way to create an advertising banner with a generative image model such as DALL·E or other GenAI image-generation tools?

    1. Upload an existing banner, ask the model to describe it, prepare a detailed prompt, make changes, ensure the prompt is no longer than 500 words, and generate a new banner

    2. Describe the desired banner in 2000 words, listing image type, main subject, background, style, etc. in detail

  19. What type of input data is used in STT or ASR models?

    1. Audio signals

    2. Text data

    3. Image data

    4. Video data

  20. What input data should be used for a TTS model and what are the expected output data?

    1. Input data: text; Output data: audio signals

    2. Input data: images; Output data: text

    3. Input data: audio signals; Output data: text

    4. Input data: video; Output data: audio signals

  21. What tasks in audio processing can artificial intelligence perform?

    1. Voice and music style conversion – AI allows transforming the voice characteristics of one speaker or singer to another while preserving the content of speech or song. It is also possible to transfer music style from one genre or performer to another.

    2. Music generation – AI models can create new, original music by learning musical patterns from a large amount of music data.

    3. Accent removal – With voice conversion methods, it is possible to reduce or remove a speaker's accent, making speech easier to understand.

    4. All of the above

  22. What is a spectrogram (or MFCC) in the context of an audio signal?

    1. A spectrogram is a visual representation that shows the frequency distribution of an audio signal over time. It depicts sound intensity at different frequencies at each moment.

    2. A spectrogram is a curve that shows the amplitude of an audio signal over time. It does not represent frequency distribution.

    3. A spectrogram is a mathematical function used for filtering and transforming audio signals. It has no graphical representation.

    4. A spectrogram is a device used for generating and playing audio signals by adjusting frequency bands.

  23. Approximately how much error do modern speech recognition models have in English?

    1. Under 10% word error rate (WER)

    2. Under 20% word error rate (WER)

    3. Under 30% word error rate (WER)

  24. Where is voice recording enhancement (speech enhancement) with artificial intelligence useful?

    1. In forensic analysis and criminology

    2. In speech recognition and transcription

    3. In cases when a high-quality voice recording is available

  25. What is the purpose of using a loss function when training artificial intelligence?

    1. The loss function is used to evaluate the difference between the AI's predicted and actual values, allowing the model to adjust for better accuracy.

    2. The loss function is used to check the operation of AI hardware and identify possible errors.

    3. The loss function serves as a safety mechanism to prevent the AI from having too much autonomy and independent operation.

    4. The loss function is used to measure the speed and efficiency of AI operations, determining its performance.
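The difference described in option 1 can be illustrated with mean squared error, one common loss function; the predicted and actual values below are toy numbers:

```python
# Mean squared error: average squared gap between predicted and
# actual values. A smaller loss means a better fit.
predicted = [2.5, 0.0, 2.1]
actual    = [3.0, -0.5, 2.0]

loss = sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(actual)
print(round(loss, 4))  # 0.17
```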

  26. What is the purpose of using metrics such as F1 or Accuracy when training an artificial intelligence model?

    1. F1 and Accuracy are used to check the speed and efficiency of the AI model during training. The faster the model trains, the better these metrics are.

    2. F1 and Accuracy are necessary to adjust the AI model's hyperparameters and improve its architecture. Based on these metrics, one can understand how to change the model's structure.

    3. F1 and Accuracy are used to evaluate the AI model's performance and quality during training and testing. They help understand how well the model can classify or predict outcomes correctly.

    4. F1 and Accuracy serve as the main criteria to compare different AI models with each other and choose the best model for a given task. The higher these metrics, the better the model.
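The evaluation described in option 3 can be sketched for a binary classifier; the labels below are toy values (1 = positive class):

```python
# Accuracy and F1 computed from true labels and model predictions.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
precision = tp / (tp + fp)
recall = tp / (tp + fn)
f1 = 2 * precision * recall / (precision + recall)
print(accuracy, round(f1, 2))  # 0.75 0.75
```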

  27. A ConvNet (image classification) model trained without data augmentation is capable of recognizing:

    1. Objects moved within the image

    2. Objects moved and rotated in the image

    3. Objects moved, scaled up, and rotated in the image

  28. The Transformer model, which underlies ChatGPT, is based on:

    1. A text database

    2. An attention mechanism that pays attention to the input text

    3. Programming and statistical rules for finding the necessary text in a database

  29. What is semantic segmentation in images?

    1. Semantic segmentation is a method of dividing an image into square segments without considering the content or meaning of the image.

    2. Semantic segmentation is an image processing method that allows determining the brightness and contrast of an image but does not provide information about its content.

    3. Semantic segmentation is the process in which each pixel of an image is assigned a semantic class, such as person, car, building, etc., thereby dividing the image into meaningful segments.

    4. Semantic segmentation is a way to convert a color image into a black-and-white image while preserving its semantic meaning.

  30. What is instance segmentation in images?

    1. Instance segmentation is an image processing method that divides an image into several segments based on pixel color values. This method is used to simplify the image and reduce its level of detail.

    2. Instance segmentation is the process of cutting out individual objects from an image and saving them as new images. This method is used to create a new image set from one larger image.

    3. Instance segmentation is the process in which individual objects in an image are identified and separated, assigning each object a unique label or identifier. This method allows precisely determining each object's location and boundaries in the image.

    4. Instance segmentation is a method that allows determining the depth of an image by analyzing the size and position of objects relative to each other. This information can be used to create a 3D model from a 2D image.

  31. How does Apple FaceID, which uses a face re-identification model, work?

    1. The model is trained using the user's face data

    2. The model is already pretrained and obtains a unique embedding vector

    3. Both options

  32. What is OCR?

    1. OCR is an abbreviation for "Optimized Character Rotation," which is a deep learning method for rotating digits in images.

    2. OCR is Optical Character Recognition, which is a deep learning application that recognizes and digitizes printed or handwritten text from images or scanned documents.

    3. OCR is a deep learning algorithm used to forecast weather by analyzing cloud images from satellites.

  33. What parts should be included in the text prompt for Midjourney image generation?

    1. Configuration parameters

    2. Type, Subject, Features, Style

    3. Sample file

  34. What actions can be performed with AI image processing tools?

    1. Remove unwanted objects from the background of a photograph

    2. Change facial expressions and emotions in a photograph

    3. Both of the above actions

  35. What parts should be in a ChatGPT prompt to get the best result?

    1. Precisely defined task, Context/Persona, Format/Tone

    2. Imprecisely defined Task, Context/Persona, Format/Tone, Facts/Data

    3. It doesn't matter how structured the prompt is

  36. What ChatGPT prompt can you use so that the text is not recognized by plagiarism detection systems but retains the same idea?

    1. Improve text below

    2. Rephrase text below

    3. Change text below

  37. Why use the paid version of ChatGPT?

    1. The paid version GPT-o1 gives 30-50% better results in various tasks

    2. Artificial intelligence runs on Nvidia GPUs, which are expensive to maintain; the free version is not economically viable, so its quality is lower

    3. Both above options

  38. Which factors have contributed most to the development of artificial intelligence in the last 10 years?

    1. Business applications, data availability, computing resource power

    2. Public interest, business applications, computing resource power

    3. Data availability, computing resource power, mathematical theory

  39. Which jobs is artificial intelligence most likely to automate first in the digital environment?

    1. Monotonous, low-paid jobs

    2. Creative, high-paid jobs

    3. Artificial intelligence will not be able to automate intellectual work for a long time

  40. What do Large Language Models (LLM) resemble the most?

    1. An oracle that can answer all questions

    2. An improvisational theater that responds based on the information provided by the user

    3. An Internet search engine

  41. In what formats can GPT-4 respond?

    1. Write software source code

    2. Create tables

    3. Create numbered lists

    4. In all of the above formats

  42. To use a Large Language Model (LLM) most effectively with company data, what is needed:

    1. Train on company data

    2. Connect a text semantic meaning model and create a RAG (Retrieval Augmented Generation) system that uses a pretrained model

    3. Program the model to search for company data in the database by itself

  43. Where is PCA (Principal Component Analysis) useful?

    1. PCA is used to increase the dimensionality of input data to visualize data and see their relationships

    2. PCA is useful for reducing the dimensionality of input data, in order to visualize data and see their relationships

    3. None of the given options

  44. In what format should categorical data, for example bmw, audi, toyota, be encoded to train an artificial intelligence model?

    1. As class indices: 0 = bmw, 1 = audi, 2 = toyota

    2. One-hot-encoded: [1, 0, 0] = bmw, [0, 1, 0] = audi, [0, 0, 1] = toyota

    3. Both ways
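Both encodings from the answer options can be sketched for the three brands directly:

```python
# The three categories from the question.
brands = ["bmw", "audi", "toyota"]

# Option 1: class indices (0 = bmw, 1 = audi, 2 = toyota)
index = {b: i for i, b in enumerate(brands)}
print(index["audi"])   # 1

# Option 2: one-hot vectors, one position per class
one_hot = {b: [1 if i == j else 0 for j in range(len(brands))]
           for i, b in enumerate(brands)}
print(one_hot["audi"])  # [0, 1, 0]
```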

  45. In what format should the input data be to train an AI model to predict apartment prices from advertisement text?

    1. Absolute values: 0…500k EUR

    2. Normalized price ranges: -1..1

    3. Both ways
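The scaling in option 2 can be sketched with min-max normalization; the prices below are toy values in the stated 0…500k EUR range:

```python
# Min-max scaling of prices into the -1..1 range.
prices = [0.0, 125_000.0, 250_000.0, 500_000.0]
lo, hi = min(prices), max(prices)

normalized = [2 * (p - lo) / (hi - lo) - 1 for p in prices]
print(normalized)  # [-1.0, -0.5, 0.0, 1.0]
```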

  46. What can be the output data of a deep machine learning model?

    1. Car price

    2. Car brand

    3. Both of the above either together or separately

  47. What can be the input data of a deep machine learning model?

    1. Car price

    2. Car brand

    3. Both of the above either together or separately

  48. When comparing two groups based on survey respondents' answers, how can you determine whether there are statistically significant differences between the groups' answers?

    1. Perform a t-test and the value must be below 0.05

    2. Perform a p-test and the value must be below 0.05

    3. Compare visually
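The t-test in option 1 can be sketched by computing Welch's t statistic by hand; the two groups below are toy 1-5 survey ratings, and in practice the p-value compared against 0.05 is obtained from the t distribution (e.g. via scipy.stats.ttest_ind):

```python
from statistics import mean, variance

# Welch's t statistic for two independent groups of survey answers.
group_a = [4, 5, 3, 4, 5, 4]
group_b = [2, 3, 3, 2, 4, 3]

na, nb = len(group_a), len(group_b)
t = (mean(group_a) - mean(group_b)) / (
    (variance(group_a) / na + variance(group_b) / nb) ** 0.5
)
print(round(t, 2))  # 3.07
```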

  49. What are histograms?

    1. A histogram is a mathematical formula used to calculate the average value of a dataset, taking into account the deviation of the data from the average.

    2. A histogram is a type of data visualization in which data is represented as a line graph, where each data point is connected by a straight line to show changes over time.

    3. A histogram is a graphical representation that shows the distribution of data by dividing them into several intervals or bins and representing the frequency of each interval as the height of a bar.

    4. A histogram is a statistical method used to test whether the difference between two datasets is statistically significant, comparing their means and standard deviations.
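The binning described in option 3 can be sketched in a few lines; the ages below are toy data with a bin width of 10:

```python
# Divide values into intervals (bins) and count the frequency of each,
# which is exactly what a histogram displays as bar heights.
ages = [23, 27, 31, 35, 38, 41, 44, 47, 52, 58]
bin_width = 10

counts = {}
for a in ages:
    lo = (a // bin_width) * bin_width  # lower edge of the bin
    counts[lo] = counts.get(lo, 0) + 1

for lo in sorted(counts):
    print(f"{lo}-{lo + bin_width - 1}: {'#' * counts[lo]}")
```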

  50. What is the backpropagation algorithm?

    1. The backpropagation algorithm is a data visualization technique that allows plotting data in three-dimensional space to better understand its structure and relationships.

    2. The backpropagation algorithm is a machine learning method used to train neural networks by propagating errors back through the network and adjusting weights to minimize the error.

    3. The backpropagation algorithm is a data compression method that reduces the volume of data, discards unnecessary information, and speeds up data processing.

    4. The backpropagation algorithm is a cryptographic method used to encrypt and decrypt data, making them more secure from unauthorized access.
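One training step of the method in option 2 can be sketched on the smallest possible network, a single hidden sigmoid neuron; all numbers below are toy values:

```python
import math

# Forward pass, error, then gradients pushed back through the chain
# rule to adjust each weight so the error shrinks.
x, target = 1.5, 1.0
w1, w2, lr = 0.4, 0.6, 0.5

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Forward pass
h = sigmoid(w1 * x)        # hidden activation
y = w2 * h                 # network output
loss = (y - target) ** 2   # squared error

# Backward pass: chain rule from the loss back to each weight
d_y = 2 * (y - target)
d_w2 = d_y * h
d_h = d_y * w2
d_w1 = d_h * h * (1 - h) * x   # sigmoid derivative is h * (1 - h)

# Weight update to reduce the error
w2 -= lr * d_w2
w1 -= lr * d_w1
```

Running the forward pass again with the updated weights gives a smaller loss, which is the whole point of the algorithm.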

  51. Null hypothesis models are most effectively used for:

    1. Speech recognition in Latvian

    2. Voice re-identification in banking systems

    3. Facial classification for emotion detection

  52. Reinforcement learning, used in robotics and video game automation, consists of:

    1. Environment, Reward function, Decision tree

    2. Environment, Reward function, Observations, Actions, Agent

    3. Environment, Reward function, Observations, Actions, Classification accuracy evaluation, Agent