Data- och informationsteknik (CSE) // Computer Science and Engineering (CSE)
We educate for the future and create societal benefit through our research, which comes to life in close collaboration with industry. We conduct research in computer science, computer engineering, software engineering, and interaction design, from basic research to direct applications. The department has a strong international character and is shared between Chalmers and the University of Gothenburg.
For research and research publications, see https://research.chalmers.se/organisation/data-och-informationsteknik/
We are engaged in research and education across the full spectrum of computer science, computer engineering, software engineering, and interaction design, from foundations to applications. We educate for the future, conduct research with high international visibility, and create societal benefits through close cooperation with businesses and industry. The department is joint between Chalmers and the University of Gothenburg.
Studying at the Department of Computer Science and Engineering at Chalmers
For research and research output, please visit https://research.chalmers.se/en/organization/computer-science-and-engineering/
Browse
Browsing Data- och informationsteknik (CSE) // Computer Science and Engineering (CSE) by Programme "Computer systems and networks (MPCSN), MSc"
- 3D Object Classification using Point Clouds and Deep Neural Network for Automotive Applications (2019) Larsson, Christian; Norén, Erik; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). Object identification is a central part of autonomous cars, and many sensors help with this. One such sensor is the LIDAR, which creates point clouds of the car's surroundings. This thesis evaluates a solution for object identification in 3D point clouds with the help of a neural network. A system named DELIS (DEtection in Lidar Systems), which takes a point cloud generated from a LIDAR as input, is designed. The system consists of two subsystems: a non-machine-learning algorithm that segments the point cloud into clusters, one for each object, and a neural network that classifies these clusters. The final output is the classes and the coordinates of the objects in the point cloud. The result of this thesis is a system, DELIS, that can distinguish between pedestrians, cars, and cyclists.
- A Comparative Study of Segmentation and Classification Methods for 3D Point Clouds (2016) NYGREN, PATRIK; Jasinski, Michael; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). Active safety has become an important part of the current automotive industry due to its proven potential in making driving more enjoyable and reducing the number of accidents and casualties. Different sensors are used in active safety systems to perceive the environment and implement driver assistance and collision avoidance systems. Light detection and ranging (LIDAR) sensors are among the commonly utilized sensors in these systems; a LIDAR produces a point cloud of its surroundings and can be used to detect and classify objects such as cars, pedestrians, etc. In this thesis, we perform a comparative study in which several methods to both segment (Region Growing and Euclidean Clustering) and classify (Support Vector Machines, Feed-Forward Neural Networks, Random Forests, and K-Nearest Neighbors) point clouds from an urban environment are evaluated. Data from the KITTI database is used to validate the methods, which are implemented using the PCL and Shark libraries. We evaluate the performance of the classification methods on two different sets of developed features. Our experiments show that the best accuracy can be obtained using SVMs: around 96.3% on the considered data set with 7 different classes of objects.
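As a toy illustration of one of the classifiers compared in the thesis above, here is a minimal k-nearest-neighbours sketch. The 2-D per-cluster features and class labels are invented for illustration and are not the thesis's feature sets:

```python
import math
from collections import Counter

def knn_classify(train, query, k=3):
    """Classify a feature vector by majority vote among its k nearest
    neighbours under Euclidean distance. `train` is a list of
    (feature_vector, label) pairs with hypothetical features standing
    in for the per-cluster descriptors used in the thesis."""
    dists = sorted((math.dist(x, query), label) for x, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D features (say, cluster height and extent) for three classes.
train = [((1.7, 0.9), "pedestrian"), ((1.6, 1.0), "pedestrian"),
         ((1.5, 2.0), "cyclist"),    ((1.4, 2.2), "cyclist"),
         ((1.5, 6.0), "car"),        ((1.6, 5.5), "car")]

print(knn_classify(train, (1.55, 2.1)))  # prints "cyclist"
```

The thesis's SVM-based winner works on the same per-cluster feature idea, only with a learned decision boundary instead of a vote.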
- A Decentralized Application for Verifying a Matching Algorithm - Programming and Testing a Smart Contract on the Ethereum Blockchain (2018) Fritz, Linnea; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). This thesis uses blockchain technology to construct a decentralized application (often called a ‘Ðapp’) for verifying the results of a matching algorithm used on data in the automotive industry. Its main intent is to explore whether the Ethereum framework can be utilized to help ensure the correctness of client responses to a query sent by a peer in the network. The application was programmed in Solidity and JavaScript and run on a local test network consisting of five clients. Testing the finished application showed that the throughput of data was slow, approximately 35 bytes/s, and that taking over the network to send corrupted information was relatively simple. These findings, along with a general study of the areas where blockchain technology is most advantageous, led to the conclusion that although it has potential as a constituent in the car industry, it is not suitable for verification of matchings at the time of writing.
- A Distributed Publish/Subscribe System built on a DHT Substrate (2016) Laszlo, André; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). The publish/subscribe pattern is commonly found in messaging systems and message-oriented middleware. When large numbers of processes are publishing messages in applications where low latency and high throughput are needed, the performance of the messaging system is critical. Several solutions exist that provide high throughput and low latency to a high number of concurrent processes, such as RabbitMQ and Kafka. What happens to the performance of the system when each process also has a complex or large set of subscriptions? This is the case when users of an internet radio application notify each other of songs currently being played and the subscriptions of each user correspond to the user's wish list, a list of songs that the user is interested in recording. This thesis examines how the popular messaging systems RabbitMQ and Kafka handle this situation when topic-based message filtering is used to model subscriptions. A prototype messaging system, Ingeborg, which is built on the key-value store Riak, is also introduced, and its performance is examined and compared to the previously mentioned systems. The results of the experimental study show that it is difficult to scale both RabbitMQ and Kafka with regard to the number of topics used, but that RabbitMQ shows greater flexibility in its configuration. Finally, the prototype system Ingeborg shows that it is possible to design a messaging system from scratch based on a key-value store, allowing even greater flexibility in prioritizing trade-offs and performance properties of a system.
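The topic-based filtering model the thesis above evaluates can be sketched in a few lines: subscribers register a set of topics (their "wish list"), and a published message is delivered only to subscribers of its topic. This is a minimal in-memory sketch of the model, not of Ingeborg's Riak-backed implementation:

```python
from collections import defaultdict

class TopicBroker:
    """Minimal topic-based pub/sub broker: publish() delivers a message
    only to subscribers whose topic set contains the message's topic."""
    def __init__(self):
        self.subs = defaultdict(set)    # topic -> set of subscriber ids
        self.inbox = defaultdict(list)  # subscriber id -> delivered messages

    def subscribe(self, sub_id, topics):
        for topic in topics:
            self.subs[topic].add(sub_id)

    def publish(self, topic, message):
        # Topic-based filtering: only matching subscriptions receive it.
        for sub_id in self.subs.get(topic, ()):
            self.inbox[sub_id].append((topic, message))

broker = TopicBroker()
broker.subscribe("alice", {"song-a", "song-b"})   # alice's wish list
broker.subscribe("bob", {"song-b"})
broker.publish("song-b", "now playing")
broker.publish("song-c", "now playing")           # nobody subscribed
print(broker.inbox["alice"], broker.inbox["bob"])
```

The scaling question the thesis studies is what happens when each subscriber's topic set grows large, which stresses exactly the `subs` lookup structure sketched here.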
- A Driving Assistance System with Hardware Acceleration (2015) Cui, Gongpei; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). Nowadays, active safety has become a hot research topic in the vehicle industry. Active safety systems play an increasingly important role in warning drivers about and avoiding a collision, or mitigating the consequences of an accident. The increased computational complexity imposes a great challenge for the development of advanced active safety applications using traditional Electronic Control Units (ECUs). One way to tackle this challenge is hardware offloading, which can exploit massive parallelism and accelerate such applications. A hardware accelerator combined with software running on a general-purpose processor composes a hardware/software hybrid system. Model-Based Development (MBD) is a common development scheme that reduces development time and time-to-market. In this project, we evaluate different MBD workflows for hardware/software co-design and propose a general workflow for MATLAB/Simulink models. We investigate key techniques for hybrid system design and identify three factors to assist hardware/software partitioning. Moreover, several essential techniques for hardware logic implementation, such as pipelining, loop unrolling, and stream transmission, are analyzed with respect to system throughput and hardware resources. This project describes the workflow for hardware/software co-design based on MBD and finds methods to improve system throughput by combining hardware accelerators and software. Using the proposed profiling methods and partitioning rules, a matrix multiplication function is selected to be implemented by a hardware accelerator.
After optimizing the hardware implementation scheme of the accelerator, a 5.4x speedup is achieved on a Zynq evaluation board.
- A Fault-tolerant Distributed Library for Embedded Real-time Systems (2020) Gudmandsen, Johanna; Hashem, Hashem; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Karlsson, Johan; Johansson, Roger. A distributed embedded control system (DECS) may have functionality that is safety-critical and time-sensitive, meaning that if these systems malfunction the consequences could be devastating. To meet these requirements, a system must fulfill real-time constraints and guarantee correct functionality even in the presence of faults. In this thesis we present a software library providing clock synchronization, real-time scheduling, and fault-tolerant decision making. It is intended for use with DECSs communicating via controller area network (CAN). To achieve fault-tolerant decision making, we propose an early-stopping fault-tolerance algorithm tolerating up to t faults in a system of 2t + 1 nodes. We further propose an adaptation of this algorithm to real-world applications where there may be an interval of correct values instead of one correct value, as assumed in the base solution. The result is a lightweight and efficient library. The clock synchronization requires one message and has a precision comparable to other known solutions, but is not fault-tolerant. The scheduler runs in O(n²) time and uses a non-preemptive rate-monotonic policy. It can handle up to 63 user-defined tasks and has a worst-case task delay of 2.5 ms for the lowest-priority task in a system with 60 tasks, assuming a task execution time of 0. The drawback is its inability to handle mixed-criticality task sets. Our proposed algorithm utilizes the properties inherent in CAN to provide an efficient way to rectify faults in the value domain. Due to the early-stopping property of the algorithm, the bus utilization increases linearly with the number of faults.
We conclude that while the library is practical and efficient, fault-tolerant clock synchronization and fault handling in the time domain are necessary improvements before the library can be used in production systems.
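The 2t + 1 requirement above reflects a classic building block of fault-tolerant decision making: with at most t faulty values among 2t + 1, discarding the extremes leaves a value bounded by the correct readings. The sketch below shows that trimming idea only; the thesis's early-stopping CAN-based algorithm is considerably more involved:

```python
def fault_tolerant_value(values, t):
    """Given 2t + 1 reported values, of which at most t may be faulty,
    discard the t smallest and the t largest. The surviving value (the
    median) is guaranteed to lie within the range spanned by the
    correct values, even if faulty nodes report arbitrary extremes."""
    assert len(values) == 2 * t + 1, "needs exactly 2t + 1 reports"
    trimmed = sorted(values)[t:len(values) - t]
    return trimmed[0]

# Five nodes (t = 2): three correct readings near 10.0, two faulty ones.
print(fault_tolerant_value([9.9, 10.0, 10.1, -50.0, 99.0], t=2))  # 10.0
```

The interval-valued adaptation mentioned in the abstract generalizes this to the realistic case where correct nodes report slightly different values rather than one exact value.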
- A GUI for application design and performance reporting of data streaming applications (2021) Teodorsson, Rikard; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Duvignau, Romaric; Gulisano, Vincenzo Massimiliano. As a result of more connected devices, and of users and applications moving to the cloud, the amount of data that computer systems need to process increases every year. The trouble of storing all this information, and a growing demand for real-time processing not possible with traditional batch processing, have led to an increase in the popularity of stream processing as a way of handling large amounts of data in a real-time fashion with low latency and high throughput. The number of Stream Processing Engines (SPEs) available for processing this streaming data is increasing, and each has its own programming style and conventions, resulting in individually developed SPEs that differ in many aspects. In general, there is a lack of a common set of tools supporting developers in designing, visualising, and monitoring streaming applications. For instance, a streaming application executed by an SPE can be modelled as a Directed Acyclic Graph (DAG), but not all SPEs have a tool to visualise this graph. This thesis concerns the design and implementation of a framework in the form of a Graphical User Interface (GUI) as a first step towards a unified view of streaming applications, aiding developers with visual representations, statistics, and code generation for different SPEs by abstracting away the implementation details. Using Java and JavaFX, a framework is developed and tested with two SPEs (Apache Flink and Liebre) and then evaluated in terms of performance, functionality, and generality.
Presented is a functional framework that allows a user to visualise DAGs of existing streaming applications, view live and offline statistics of executions, and design a streaming application graphically, generating the corresponding Java code. The implementation is general and can be adapted to work with other popular SPEs. The resulting framework is a first step towards a unified view of streaming applications from different SPEs and adds a tool for developers to use, enabling increased productivity and better understanding while working with both new and already existing streaming applications.
- A Peer-to-Peer Point of Sale System: A design of a distributed system with peer-to-peer architecture to replace a solution based on a client-server model (2015) Peter, Joen; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). In this thesis we investigate an enterprise IT system based on a simple 3-layer client-server model, with clients connecting to local servers which in turn connect to a central enterprise-wide server. The system analyzed is a retail system, consisting of Point of Sale (POS) clients and one server in every shop to handle the clients. The local servers in turn communicate with a central server that keeps track of all retail transactions from all the shops and all the reference data (articles to sell, prices, campaigns, etc.) being sent to the shops. Because of cost and maintenance issues, there is a demand for delivering the service without the overhead of the local servers. Finding a way to eliminate the local server from the system is the purpose of this thesis. This is achieved by closely examining the data flow and functionality of the local server, making it possible to suggest two different approaches to solve the problem using a peer-to-peer approach. The two designs are compared and evaluated, showing the difference between the approaches. This result can also be used in a more generalized manner to look at removing layers of servers in similar systems.
- A Real-Time Testbed for Distributed Algorithms: Evaluation of Average Consensus in Simulated Vehicular Ad Hoc Networks (2017) Casparsson, Albin; Gardtman, David; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). Intelligent transportation systems consist of applications which use the communication capabilities of vehicles to solve tasks that require cooperation with other vehicles. One possible application is cooperative positioning, in which vehicles increase the accuracy of their positions by sharing positioning information with each other. Previous research has suggested using average consensus to share this information. Average consensus is a type of distributed algorithm that, in a system where each node performs a measurement of some value, can make all nodes reach agreement on the average of the set of measurements. This thesis evaluates the performance of average consensus algorithms in vehicular ad hoc networks. Full-scale experiments on vehicular systems are costly, but it is also not necessarily desirable to fully simulate a vehicular system. This thesis presents a testbed where we opt to fully simulate the vehicular communication network. The vehicles that are part of the network can be simulated using either virtual nodes or a scaled-down physical robot system. An 802.11p wireless network, which has been suggested for vehicular ad hoc networks, is simulated using the ns-3 network simulator. Additionally, some properties that cause the wireless network to be unreliable are simulated. Furthermore, three average consensus algorithms are implemented with some modifications to account for the properties of vehicular ad hoc networks. These algorithms are evaluated in the created testbed in order to study their performance in such a network.
We observe that consensus converges asymptotically in a simulation of randomly moving nodes, and that the consensus states of the nodes oscillate around the true average when new nodes are allowed to enter the system during consensus. The consensus converges to a state that does not necessarily coincide exactly with the true average, which is to be expected since some packets are lost due to the simulated wireless network not being fully reliable. We also demonstrate that performing average consensus on the position of an object can improve the precision of driving in a physical system of moving robots.
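A common form of the average-consensus update evaluated above has each node repeatedly move its state toward its neighbours' states: x_i <- x_i + alpha * sum_j (x_j - x_i). With a connected graph and small enough alpha, all states converge to the average of the initial measurements. This is an idealized lossless sketch; the thesis's algorithms add modifications for unreliable 802.11p links and joining nodes:

```python
def average_consensus(values, neighbours, alpha=0.2, rounds=200):
    """Synchronous average consensus: every round, each node nudges its
    state toward its neighbours' states by the step size alpha.
    `neighbours` maps node index -> list of neighbour indices."""
    x = list(values)
    for _ in range(rounds):
        # All nodes update from the same snapshot of the previous round.
        x = [xi + alpha * sum(x[j] - xi for j in neighbours[i])
             for i, xi in enumerate(x)]
    return x

# Ring of four nodes measuring 1, 2, 3, 6 (true average 3.0).
nbrs = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
states = average_consensus([1.0, 2.0, 3.0, 6.0], nbrs)
print([round(s, 3) for s in states])  # all entries converge to 3.0
```

Packet loss perturbs exactly this update, which is why the simulated system settles near, but not exactly at, the true average.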
- A Rust-based Runtime for the Internet of Things (2017) Adolfsson, Niklas; Nilsson, Fredrik; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). As the number of connected devices increases in what is known as the Internet of Things (IoT), reliability and security become more important. C and C++ are common programming languages for IoT devices, mostly due to their fine-grained memory management and low runtime overhead. As these languages are not memory-safe and memory management has proven difficult to handle correctly, problems such as buffer overflows and dangling pointers are likely to increase as IoT devices become more prevalent. These problems can result in reliability and security issues. In contrast to C and C++, Rust is a memory-safe systems programming language that, just like C and C++, provides low runtime overhead and fine-grained memory management. Potentially, Rust can reduce the number of unreliable and insecure connected devices as a result of memory safety. Tock is an embedded operating system implemented in Rust, and thus it is inherently safe when it comes to memory management. In this thesis, we investigate whether Rust is suitable for developing device drivers. We implement four different device drivers covering a range of functions and evaluate the energy consumption as well as the execution time of our drivers compared to other current state-of-the-art embedded operating systems. These device drivers enhance the usability of Tock for connected devices. With the device drivers, we show that Tock introduces an overhead in terms of execution speed but has a similar energy consumption in comparison to the current state-of-the-art embedded operating systems.
Despite the increased runtime overhead, we argue that the benefits of Tock and Rust, i.e., increased reliability and security along with similar power consumption, exceed the negative aspects. Finally, we conclude that Rust and Tock are appropriate for developing device drivers for IoT devices.
- A scalable manycore simulator for the Epiphany architecture (2019) Jeppsson, Ola; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Sheeran, Mary. The core count of manycore processors increases at a rapid pace; chips with hundreds of cores are readily available, and thousands of cores on a single die have been demonstrated. A scalable model is needed to effectively simulate this class of processors. We implement a parallel functional network-on-chip simulator for the Adapteva Epiphany architecture, which we integrate with an existing single-core simulator to create a manycore model. To verify the implementation, we run a set of example programs from the Epiphany SDK and the Epiphany toolchain test suite against the simulator. We run a parallel matrix multiplication program against the simulator, spread across a varying number of networked computing nodes, to verify the MPI implementation. Having a manycore simulator makes it possible to develop and optimize scalable applications even before the chips for which they are designed become available. The simulator can also be used for parameter selection when exploring richer hardware design spaces.
- A Social-Aware Federated Real-Time Scheduling Algorithm for Unrelated Multiprocessor Platforms (2022) Wilkins, David; Hammargren, Oskar; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Jonsson, Jan; Pathan, Risat. Real-time systems are commonly found in the modern world, ranging from aerospace control systems to health-care equipment. Real-time systems operate under strict timing constraints, meaning each program (i.e. task) must complete before a given deadline. Thus, a real-time scheduling algorithm needs to schedule each task such that all deadlines are guaranteed to be met. Due to the sophistication of many modern real-time applications, the workload of real-time tasks is ever increasing. This creates a demand for multiprocessor platforms that can distribute the workload among several processors. Furthermore, many multiprocessor platforms are heterogeneous, meaning they include processors of different types that offer different capabilities to different tasks. This allows hardware to be specialized for different types of tasks. An example of such a platform is ARM's big.LITTLE architecture, which combines high-performance processing units with power-efficient processors. However, scheduling real-time tasks on multiprocessors is a difficult problem. One approach to this problem is federated scheduling, which divides tasks into two categories, light and heavy. Light tasks can meet their deadlines using only one processor, while heavy tasks need more than one processor to meet their deadlines. Thus, federated scheduling assigns a cluster of processors to each heavy task. The light tasks are then assigned to the remaining processors. This assignment problem is intractable, since every possible task-to-processor assignment needs to be considered in order to find the optimal solution.
The current state of the art in federated scheduling on heterogeneous platforms has a limitation: each task takes its preferred processors regardless of whether these processors are critical to other tasks. We fill this gap by providing a social-aware processor assignment algorithm, which assigns each processor to the task that needs it the most. Our social-aware processor assignment algorithm is empirically evaluated through simulation, and its performance is compared with the current state of the art. The simulations show that our social-aware algorithm performs better in most cases.
- A Study of Concurrent Data Structures (2013) Li, Bo; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). This Master's thesis studies four concurrent data structures, with emphasis on two concurrent tree data structures, in particular concurrent search trees. We have studied two concurrent search trees, the concurrent AVL tree and the concurrent counting-based tree (CBTree), and two concurrent queues, the lock-free concurrent queue and the two-lock concurrent queue. We implemented two variants of the concurrent CBTree as well as the two concurrent queues. The optimistic concurrency control mechanism used in the concurrent tree data structures is called hand-over-hand optimistic validation. We further evaluated the implementations of the data structures, coded in Java. The evaluations were done on an Intel workstation running Linux with 24 hardware threads. Furthermore, the advantages and drawbacks of these data structures are analyzed. Our study shows that the CBTree should be implemented with some special mechanisms to achieve better performance. Thus, this thesis presents important explorations towards better implementations of concurrent search trees.
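The "hand-over-hand" idea mentioned above is easiest to see in its lock-based form, lock coupling: a traversal holds the current node's lock while acquiring the successor's, then releases the predecessor. The sketch below shows lock coupling on a sorted linked list only as an illustration of the traversal pattern; the CBTree's hand-over-hand optimistic validation instead traverses without locks and validates version numbers, and the thesis's implementations are in Java:

```python
import threading

class Node:
    def __init__(self, key, nxt=None):
        self.key, self.next = key, nxt
        self.lock = threading.Lock()

class CoupledList:
    """Sorted linked list with sentinel head/tail, searched via
    hand-over-hand (lock-coupling) traversal."""
    def __init__(self, keys):
        nxt = Node(float("inf"))                 # tail sentinel
        for k in sorted(keys, reverse=True):
            nxt = Node(k, nxt)
        self.head = Node(float("-inf"), nxt)     # head sentinel

    def contains(self, key):
        pred = self.head
        pred.lock.acquire()
        curr = pred.next
        curr.lock.acquire()
        while curr.key < key:                    # tail key +inf stops the loop
            pred.lock.release()                  # hand over: drop predecessor,
            pred, curr = curr, curr.next         # advance, then lock successor
            curr.lock.acquire()
        found = curr.key == key
        pred.lock.release()
        curr.lock.release()
        return found

lst = CoupledList([3, 1, 7])
print(lst.contains(7), lst.contains(4))  # True False
```

At most two locks are ever held per traversal, so concurrent searches can pipeline down the list instead of serializing behind one global lock.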
- A Taxonomy of Quantum Algorithms - The core ideas of existing quantum algorithms and their implications on cryptography (2018) Stigsson, Anders; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). With quantum computing being one of the fastest growing topics in multiple research areas, such as computer science and physics, a taxonomy of the existing quantum algorithms is necessary. However, before this thesis, no taxonomy which included many of the existing quantum algorithms could be found. We have filled that gap with this thesis. The result is a taxonomy of 31 algorithms. Each algorithm is classified into a group depending on its characteristics and the core idea that it uses. We have focused on three core ideas: 33% of the algorithms use the Quantum Fourier Transform, 27% use Amplitude Amplification, and 15% use Quantum Walks, with the remaining 25% being classified as "Other". On top of this, we also discuss the security implications for the cryptographic schemes used today once quantum computers become a reality. A taxonomy of an area that expands as fast as quantum computing is never finished, but we believe that this thesis provides a good base for future work in the area. This thesis can also be used as an introduction to quantum computing for students with basic knowledge of computer science and mathematics.
- A Version Oriented Parallel Asynchronous Evolution Strategy for Deep Learning (2021) JANG, MYEONG-JIN; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Papatriantafilou, Marina; Tsigas, Philippas. In this work we propose a new parallel asynchronous Evolution Strategy (ES) that outperforms existing ESs, including the canonical ES and the steady-state ES. ES has been considered a competitive alternative for optimizing neural networks in deep reinforcement learning, instead of using an optimizer and backpropagation. In this thesis, three different ES systems were implemented to compare their performance. Two were implemented based on existing ES systems, the canonical ES and the steady-state ES, respectively. The third is the proposed system, called Version Oriented Parallel Asynchronous Evolution Strategy (VOPAES). The canonical ES replaces all population individuals at each generation, whereas the steady-state ES replaces only the weakest individual with the newly created one. By replacing all population individuals, the canonical ES can optimize the network faster than the steady-state ES. However, it requires synchronization, which might increase CPU idle time. By contrast, a parallel steady-state ES does not require synchronization, but its learning speed can be slower than that of the parallel canonical ES. Therefore, we suggest VOPAES as an advanced ES solution that takes the benefits of both the parallel canonical ES and the parallel steady-state ES. The test results of this work demonstrate that the canonical ES can be implemented asynchronously using versions. Moreover, by merging the benefits, VOPAES decreases CPU idle time while maintaining optimization accuracy and speed comparable to the parallel canonical ES. In conclusion, VOPAES achieved the fastest training speed among the implemented ES systems.
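The steady-state replacement policy contrasted above can be shown in a toy serial form: each step mutates a random parent, and the offspring replaces the weakest individual if it is fitter. This is only a sketch of the replacement rule the thesis parallelizes; the fitness function, parameters, and population size here are illustrative, not the thesis's deep-learning setup:

```python
import random

def steady_state_es(fitness, dim=2, pop_size=8, sigma=0.1, steps=300, seed=1):
    """Toy steady-state evolution strategy: mutate a random parent with
    Gaussian noise and replace the weakest individual if the child is
    fitter. No generation-wide synchronization point is needed, which
    is the property that makes the parallel variant asynchronous."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(steps):
        parent = rng.choice(pop)
        child = [x + rng.gauss(0, sigma) for x in parent]
        weakest = min(range(pop_size), key=lambda i: fitness(pop[i]))
        if fitness(child) > fitness(pop[weakest]):
            pop[weakest] = child             # steady-state replacement
    return max(pop, key=fitness)

# Maximize a simple concave fitness with its optimum at the origin.
best = steady_state_es(lambda x: -sum(v * v for v in x))
print(best)
```

A canonical ES would instead generate a whole new population per generation and wait for every evaluation to finish, which is the synchronization cost VOPAES is designed to avoid.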
- Abstraction Layers and Energy Efficiency in TockOS, a Rust-based Runtime for the Internet of Things (2018) Nilsson, Filip; Lund, Sebastian; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers). The advent of the Internet of Things (IoT) has led to an increasing number of connected devices that need to run several applications concurrently. This calls for an operating system with a complete network stack, customized for embedded systems and required to stay up and running for very long periods of time. In this thesis, we demonstrate how Tock, an operating system written in Rust, can easily be ported to a new hardware platform and provide similar results in terms of performance and energy efficiency as other state-of-the-art operating systems for the IoT. Our thesis revolves around the CC26xx family of microcontrollers from Texas Instruments. These microcontrollers provide a wide range of features for power management, such as peripheral clock management, and support for several different power modes. We show how software constructs can be used to facilitate the use of these power-saving resources and decide which power mode to use depending on the workload. Besides comparing Tock with its competitors, we document the process of working with Rust in an embedded setting and investigate whether Tock manages to leverage the features of Rust to its advantage through an adequate abstraction level.
- Adversarial Black-Box Attacks in the Domain of Device Fingerprints (2020) Andersson, Joel; Örtenberg, Gustav; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Dubhashi, Devdatt; Schneider, Gerardo. Network security products incorporate many different tools in order to secure large networks. State-of-the-art products often utilize machine learning in order to classify devices connected to a network and assign them different levels of trust without the need for authentication. These zero-configuration security mechanisms work similarly to image-classifying Deep Neural Networks and are of interest for big organizations where large numbers of devices come and go every day. However, solutions leveraging the power of machine learning also inherit its vulnerability to adversarial samples. Previous work has shown that even in query-limited black-box scenarios, which are the most limiting for an attacker, image classifiers are vulnerable to adversarial attacks that make use of specially crafted input vectors [24]. This study shows that known attack techniques against image classifiers can be successfully reapplied to classifiers in the domain of device fingerprints in computer networks. We provide proof of concept that previously discovered adversarial sampling techniques are applicable in the domain of device fingerprints by attacking a well-known commercial classifier. We show that across ten different devices, on average 9.9% of the adversarial samples were successfully misclassified by the classifier. The most prominent of those devices had 36% of its adversarial samples misclassified. These results point to the need for more sophisticated training algorithms as well as the importance of not building solutions that rely on trusting device- or user-supplied data.
- AI/ML Algorithms for Video Data Filtration (2023) Amin, Siddharth; Atwine, Dean; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Tsigas, Philippas; Ali-Eldin Hassan, Ahmed. Video cameras are ubiquitous in today's society, with cities and organizations steadily increasing the size and scope of their deployments. These applications have benefited from cloud computing's large-scale computing and storage capabilities over the last two decades. However, the massive amounts of data generated by these high-definition cameras are proving too large to transport and process in real time in the cloud. Many critical applications, such as public safety, surveillance, and traffic control, rely heavily on video cameras. Filtering out frames that do not contain relevant information for the query at hand is a common (and natural) strategy used by systems to improve efficiency. However, this necessitates that the filtering algorithm can contextually decide whether a frame is relevant or not. This research looks into the creation of a video analytics pipeline that uses computer vision tasks, object classification models, and a prioritization algorithm to effectively filter frames from multiple cameras while dealing with the oversubscription of streams on a processing node, sending only relevant frames for further processing. In this thesis, we examine multiple lightweight computer vision and classification models that can be used to classify whether a frame contains a contextually interesting object. We then design a pipeline using techniques such as frame differencing, lightweight Deep Neural Networks (DNNs), and a frame prioritization algorithm to decide which frames will be processed in the case of oversubscription, and in what order. Our results show that our framework can accommodate up to 85% more streams than running without the framework.
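Frame differencing, the cheapest filter in the pipeline above, simply asks whether enough pixels changed since the previous frame to justify running the heavier DNN stage. A minimal sketch, with grayscale frames as flat lists of 0-255 intensities and thresholds that are illustrative rather than the thesis's tuned values:

```python
def changed(prev, frame, pixel_thresh=25, frac_thresh=0.01):
    """Flag a frame for further processing only if the fraction of
    pixels differing from the previous frame by more than pixel_thresh
    reaches frac_thresh. Static scenes are dropped cheaply, before any
    neural network runs."""
    diff = sum(abs(a - b) > pixel_thresh for a, b in zip(prev, frame))
    return diff / len(frame) >= frac_thresh

static = [100] * 1000
moving = [100] * 980 + [200] * 20   # 2% of pixels changed
print(changed(static, static), changed(static, moving))  # False True
```

In the full pipeline, frames that pass this gate are then scored by a lightweight classifier and ranked by the prioritization algorithm when streams are oversubscribed.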
- PostAlgorithms for Verifying Backwards Compatibility In Distributed Real-Time Systems(2016) Abdulwahhab, Husam; Smirnovs, Maksims; Chalmers tekniska högskola / Institutionen för data- och informationsteknik (Chalmers); Chalmers University of Technology / Department of Computer Science and Engineering (Chalmers)BACKGROUND: Backwards compatibility is a key solution for companies attempting to reduce the cost and effort of introducing software updates to their customers. It is also an important property for gaining customers’ trust in accepting updates without fearing side effects, such as some functionality not working properly. It is therefore important that a newly released software update is backwards compatible with an older version of the same product’s software. METHOD: The main goal of this paper is to derive algorithms for verifying backwards compatibility and to implement them. The results are obtained by following a research methodology based on design research. A literature review was conducted to identify existing methods for verifying backwards compatibility. Algorithms for verifying backwards compatibility were designed, prototyped, and tested on a distributed real-time system in order to evaluate their behaviour. The implemented prototypes of the algorithms can verify backwards compatibility in distributed real-time systems in the telecommunications industry. RESULTS: A description of all the identified algorithms and strategies from academia and industry is presented, along with their classification into a taxonomy. Three algorithms designed by the authors verify backwards compatibility in distributed real-time systems.
The first algorithm is based on the communication signals of a component during the execution of one of its tasks, the second focuses on the details of what the system is doing while executing the tasks, and the third combines characteristics of the previous two. Prototypes of all the algorithms were developed and tested on various test scenarios of a software update. The combined results of the algorithms identified 8% backwards compatibility problems, which is within the acceptable range when a software update is performed on the SGSN-MME product. All three algorithms provide details on what might be causing the incompatibility of a software update. CONCLUSION: Backwards compatibility is a hard property to achieve, especially in large and complex systems such as the one this study was based on. The algorithms identified in this study show a lot of promise for developing automated methods for verifying backwards compatibility, as demonstrated by the prototypes of the three algorithms developed over the study period. The work carried out over this study shows that there are a number of open gaps to be studied in the future in order to achieve fully autonomous algorithms for verifying backwards compatibility for various components of a system.
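The signal-based check underlying the first algorithm can be sketched as an ordered trace comparison. This is a hypothetical simplification (function name, trace format, and acceptance rule are assumptions, not the thesis implementation): the new version passes if every signal the old version emitted for a task still appears in the same relative order, with added signals tolerated.

```python
def is_backwards_compatible(old_trace, new_trace):
    """Compare communication-signal traces of two component versions for
    the same task. Returns (compatible, missing_signals): signals the old
    version sent that the new trace does not reproduce in order."""
    it = iter(new_trace)
    # `any` advances `it` until the signal is found, preserving order.
    missing = [sig for sig in old_trace if not any(sig == s for s in it)]
    return (len(missing) == 0, missing)

# A new version that only adds a LOG signal is accepted ...
ok_update, _ = is_backwards_compatible(
    ["REQ", "ACK", "DONE"], ["REQ", "LOG", "ACK", "DONE"])
# ... while one that drops the ACK is flagged.
bad_update, missing = is_backwards_compatible(
    ["REQ", "ACK", "DONE"], ["REQ", "DONE"])
```

The second and third algorithms would replace or augment the trace contents (e.g. with internal execution details) while keeping the same compare-and-report structure.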
- PostAn experimental evaluation of image input degradation on machine learning performance(2020) Andersson, Oscar; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Hebig, Regina; Berger, ChristianSignificant advancements in the field of machine learning have been made during the past decade, as artificial neural networks became feasible to compute. Training these neural networks requires large amounts of data, especially in the field of autonomous driving, where image data needs to be stored at large scale. Such data can be reduced in size significantly by using lossy video compression, at the cost of losing visual fidelity. This trade-off could potentially be balanced so that the data size of a dataset is reduced while its quality is not significantly degraded. This thesis aims to establish the effects of video compression on machine learning algorithms performing computer vision tasks, specifically for autonomous driving. This involved evaluating the effect of certain encoders and coding parameters on a set of ML algorithms. Multi-objective optimisation was performed to find sets of optimum coding parameters for each evaluated encoder, referred to as coding parameter sweet spots. Experiments were conducted to measure the impact of video compression, using the optimum coding parameters, on machine learning algorithms. The experiment results indicated that some encoders’ sweet spots were able to compress the data without significantly altering ML performance. The collected experiment data was also used to compare the encoders’ capabilities and behaviour when compressing data for ML usage. Finally, suggestions for how practitioners should evaluate and validate lossy video compression methods were provided.
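The multi-objective "sweet spot" idea in this abstract amounts to finding non-dominated configurations in the (data size, ML performance) plane. A minimal sketch, with entirely hypothetical numbers (the thesis optimises real encoder parameters against measured ML accuracy):

```python
def pareto_front(points):
    """Return the non-dominated (size, accuracy) configurations: those for
    which no other configuration compresses at least as well AND is at
    least as accurate, with strict improvement in one objective."""
    return [(size, acc) for size, acc in points
            if not any(s <= size and a >= acc and (s < size or a > acc)
                       for s, a in points)]

# Hypothetical (dataset size in MB, ML accuracy) per coding-parameter set.
configs = [(100, 0.90), (80, 0.90), (60, 0.85), (120, 0.95), (90, 0.80)]
sweet_spots = pareto_front(configs)
```

Each surviving configuration is a defensible trade-off; a practitioner then picks the point on the front matching their storage budget or accuracy floor.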