Theses & Dissertations

Recent Submissions

Now showing 1 - 5 of 179
  • Item
    Constructing Orientable Sequences Using Cycle Joining Algorithm
    (University of Guelph, ) Hazari, Mahmoud; Sawada, Joseph
    A de Bruijn sequence of order n is a cyclic binary sequence of length 2^n in which every string of length n appears exactly once as a substring. Orientable sequences are a related class of sequences with the stronger requirement that every length-n substring appears exactly once and, moreover, never recurs when the sequence is read from right to left. Such sequences have applications in position-location sensing for robotic vision. Like de Bruijn sequences, orientable sequences can be constructed by several methods, including greedy algorithms such as Prefer-same and Prefer-opposite, exhaustive search, and cycle joining. An upper bound on the length of an orientable sequence can be obtained by direct counting, whereas establishing a lower bound is more involved. Dai et al. gave a construction attaining a lower bound, but it has not been implemented by other researchers so far. The aim of this research is a practical, easily implementable construction of orientable sequences attaining this lower bound. To achieve this, we implemented the cycle joining algorithm, which identifies strings sharing n−1 bits and links the cycles containing them into a single, longer sequence of the same order n.
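    The definition above lends itself to a direct check. Below is a minimal sketch in plain Python (the function names are ours, and the test is brute force rather than the thesis's construction): it verifies that every cyclic length-n window of a binary string occurs exactly once and never recurs when the string is read in reverse.

        def windows(seq, n):
            """All cyclic windows of length n of a binary string seq."""
            ext = seq + seq[:n - 1]
            return [ext[i:i + n] for i in range(len(seq))]

        def is_orientable(seq, n):
            fwd = windows(seq, n)
            if len(set(fwd)) != len(fwd):      # a window repeats in the forward reading
                return False
            rev = {w[::-1] for w in fwd}       # windows seen in the reverse reading
            return set(fwd).isdisjoint(rev)    # no window also appears reversed

    Note that a palindromic window such as '010' rules a sequence out immediately, since it is its own reversal; exhaustive search with this predicate can locate short orientable sequences for a given n.
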
  • Item
    Confession Networks: Boosting Accuracy and Improving Confidence in Classification and Semantic Segmentation
    (University of Guelph, ) Radpour, Sina; Bruce, Neil
    In this thesis, we present a novel method for measuring the confidence of neural networks in classification problems. Statistical approaches for measuring neural network confidence in classification already exist; we instead propose a new loss function under which the network signals how much confidence it has in its prediction, independently of the prediction itself. The first objective of this thesis is to design a loss function that yields a confidence measure alongside the classification scores of a neural network. A second goal is to examine whether such a loss function can improve network performance. A confidence measure is essential in many applications, such as autonomous driving, where predictions about the area around the vehicle must be reliable, and high-stakes medical diagnostic decisions. We demonstrate that the proposed approach both improves prediction accuracy and provides a valuable output for gauging the confidence of each prediction.
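    The abstract does not specify the loss function itself. As a rough sketch of the general idea, the network below emits a confidence value alongside its class scores, and the loss adds a term rewarding confidence that matches correctness; the network name, the additive form, and the weight lam are illustrative assumptions, not the thesis's formulation (PyTorch):

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ConfidenceNet(nn.Module):  # hypothetical architecture, for illustration
            def __init__(self, in_dim, n_classes, hidden=64):
                super().__init__()
                self.body = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
                self.cls_head = nn.Linear(hidden, n_classes)  # classification scores
                self.conf_head = nn.Linear(hidden, 1)         # confidence in (0, 1)

            def forward(self, x):
                h = self.body(x)
                return self.cls_head(h), torch.sigmoid(self.conf_head(h))

        def loss_fn(logits, conf, target, lam=0.5):
            ce = F.cross_entropy(logits, target)  # standard classification term
            # Push confidence toward 1 when the prediction is correct and 0
            # otherwise; one simple stand-in for a confidence-aware loss term.
            correct = (logits.argmax(dim=1) == target).float().unsqueeze(1)
            return ce + lam * F.binary_cross_entropy(conf, correct)
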
  • Item
    Learning to Code: An examination of Novice Programmer Profiles
    (University of Guelph, ) Fraser, James; McCuaig, Judi; Gillis, Dan
    Online learning environments have come to play a critical role at all levels of education, and the abrupt shift to online delivery has highlighted the importance of online educational resources. Historically, introductory computer science has shown high failure and drop-out rates. This research examines the effort levels of novice university programming students throughout the semester. We developed the IFS (Immediate Feedback System), a supplementary online platform where students submit snapshots of their programming coursework. The platform monitors students' online actions, collects submissions, and provides assessment feedback. Students may additionally complete psychological surveys and self-assessments to communicate their perception of their course progression. We conducted a multi-semester study of introductory programming courses, developing student profiles that categorize usage patterns and course outcomes (see the sketch following this abstract). Profiles at both the session and semester level were compared against students' final grades. Finally, transitions between usage profiles identified changes in students' effort levels during their courses.
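    As a purely illustrative sketch of profile construction (the features, their values, and the cluster count are our assumptions, not the study's), session-level usage records can be clustered into profiles:

        import numpy as np
        from sklearn.cluster import KMeans
        from sklearn.preprocessing import StandardScaler

        # One row per student session: submissions made, feedback views,
        # minutes active (hypothetical features, for illustration only).
        X = np.array([[1, 0, 10],
                      [5, 3, 45],
                      [2, 1, 20],
                      [8, 6, 90]], dtype=float)
        profiles = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
            StandardScaler().fit_transform(X))
        print(profiles)  # usage-profile label assigned to each session
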
  • Item
    Line Labelling of Polyhedral Scenes: Comparing Performance and Properties of Different Neural Architectures
    (University of Guelph, ) Guntamukkala, Jeetendra Viraj; Bruce, Neil
    The classical problem of line labelling involves classifying an object’s edges into three categories: convex, concave, and occluded. Understanding the properties of an object’s edges makes it possible to recognize and interact with the objects in a scene. However, it is unclear to what extent neural networks can relate geometry across a scene to produce a correct labelling of edges. To investigate this, we introduce a dataset generator that, without any manual input, produces random two-dimensional polyhedral scenes together with ground-truth labels for the objects’ edges. We then conduct a comprehensive benchmark of selected Convolutional Neural Networks, Recurrent Neural Networks, and Vision Transformers on semantic segmentation of these edge categories. Results indicate that gradually upsampling the encoded features and using recurrent algorithms improve segmentation performance. Furthermore, we perform a series of data-driven experiments to investigate how various scene conditions influence the performance of these neural networks.
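    Segmentation quality on the three edge classes is commonly summarized with per-class intersection-over-union; a minimal NumPy sketch follows (the class indices, including a background class, are our assumptions, not the thesis's evaluation protocol):

        import numpy as np

        # Label maps: 0 = background, 1 = convex, 2 = concave, 3 = occluded.
        def per_class_iou(pred, gt, n_classes=4):
            """Intersection-over-union per class for two integer label maps."""
            ious = []
            for c in range(n_classes):
                inter = np.logical_and(pred == c, gt == c).sum()
                union = np.logical_or(pred == c, gt == c).sum()
                ious.append(inter / union if union else float("nan"))
            return ious
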
  • Item
    Mini-Batch Alopex-b For Training Neural Networks
    (University of Guelph, ) Roy, Protim; Kremer, Stefan
    While gradient descent is the dominant approach to parameter adaptation in neural networks, ALOPEX-B is an alternative to gradient-based methods. In this thesis, we present a version of ALOPEX-B that calculates the global loss of a neural network twice per update in order to determine its weight perturbations. We illustrate the training procedure on the XOR problem by plotting the function computed by each neuron of a 2-2-1 network, and compare these functions to those of networks trained with gradient descent and Adam. The same algorithm is then used to train a logistic regressor on MNIST; here we apply a mini-batch procedure and show that ALOPEX-B reaches a lower loss in fewer epochs than stochastic gradient descent. We also provide a hybrid version of ALOPEX-B that calculates a gradient in order to set the direction of descent.
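    The abstract does not give the exact update rule; the sketch below illustrates the correlation-driven family that ALOPEX-B belongs to, with the global loss evaluated twice per step (before and after the perturbation). The logistic-regression loss, step size eta, and temperature T are illustrative choices, not the thesis's settings:

        import numpy as np

        rng = np.random.default_rng(0)

        def loss(w, X, y):
            """Global loss of a logistic regressor (illustrative target problem)."""
            p = 1.0 / (1.0 + np.exp(-(X @ w)))
            return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

        def alopex_step(w, X, y, prev_dw, prev_dE, eta=0.01, T=0.1):
            # Per-weight correlation between the last weight change and the last
            # global loss change: a move that raised the loss tends to be reversed.
            corr = prev_dw * prev_dE
            p_down = 1.0 / (1.0 + np.exp(-corr / T))  # probability of stepping -eta
            dw = np.where(rng.random(w.shape) < p_down, -eta, eta)
            E0 = loss(w, X, y)       # first global loss evaluation
            w_new = w + dw
            E1 = loss(w_new, X, y)   # second evaluation, after the perturbation
            return w_new, dw, E1 - E0

        # Toy usage: carry (dw, dE) across steps; the first step is unbiased.
        X = rng.normal(size=(32, 3)); y = rng.integers(0, 2, 32).astype(float)
        w, dw, dE = rng.normal(size=3), np.zeros(3), 0.0
        for _ in range(100):
            w, dw, dE = alopex_step(w, X, y, dw, dE)
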