Students' Corner
The summer school will be accompanied by an exchange platform for participants, the Students' Corner, which will allow them to network and share their research. Furthermore, researchers from SFB 876 will also present their work there.

How does the Students' Corner work?
The Students' Corner is a poster session, accompanied by a long coffee break, designed to encourage participants to network and discuss their research. Participants will present their work on A0 posters and share their research with other attendees. As with poster presentations at a conference, this gives you the opportunity to talk informally with visitors about your research. Presenters are asked to stand next to their posters to explain them and answer questions about their content.
After registering for the Summer School, you will be provided with a link to upload and (later) modify your contribution to the Students' Corner. The deadline for the final submission or modification of your abstract is 31 July 2022.
As a service, we offer to print your poster for you, provided that you send us a PDF.
Presented Posters
Modeling homopolymer errors for the detection of eccDNA
The standard pair hidden Markov model (pHMM) for sequence alignment found in the literature does not explicitly account for homopolymer errors, a class of errors prevalent in nanopore sequencing data. We modified the default pHMM to be more suitable for such data.
The resulting HomopolyPairHMM can then be used in the process of detecting extrachromosomal circular DNA in nanopore sequencing samples.
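To illustrate the kind of modification involved, here is a minimal sketch of how a pair HMM's state set could be extended with dedicated homopolymer error states; all state names and probabilities are illustrative placeholders, not the published model.

import numpy as np

states = ["M", "I", "D", "HI", "HD"]  # match, insert, delete, plus
                                      # homopolymer insert/delete states
# Row: from-state, column: to-state. The higher self-transition mass on
# HI/HD reflects that nanopore homopolymer errors extend over whole runs.
trans = np.array([
    [0.90, 0.03, 0.03, 0.02, 0.02],  # M
    [0.70, 0.30, 0.00, 0.00, 0.00],  # I
    [0.70, 0.00, 0.30, 0.00, 0.00],  # D
    [0.40, 0.00, 0.00, 0.60, 0.00],  # HI: sticky within a run
    [0.40, 0.00, 0.00, 0.00, 0.60],  # HD: sticky within a run
])
assert np.allclose(trans.sum(axis=1), 1.0)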
Interpolation of Instrument Response Functions
In contrast to the well-known methods for (even multi-dimensional) interpolation between data points, interpolation techniques between whole probability density functions (PDFs) are rarely employed. One possible use case of these methods is the interpolation of the Instrument Response Functions (IRFs) of Imaging Air Cherenkov Telescopes (IACTs). IRFs, in this context, collect multiple pieces of information, partially in the form of PDFs. They are needed for the correct reconstruction of spectral and spatial information. As IRFs are constructed from resource-consuming Monte Carlo simulations for specific combinations of measurement conditions, IACTs would profit from alternative means of IRF computation to reduce the need for simulations. This poster contribution aims at showcasing existing methods, discussing their applicability to the specific use case of IRF interpolation, and briefly introducing future enhancements.
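As a small illustration of what interpolating between whole PDFs means (a generic sketch, not the methods from the poster), compare naive bin-wise averaging with interpolation of the quantile functions, i.e., along the 1-Wasserstein geodesic:

import numpy as np
from scipy import stats

grid = np.linspace(-6, 6, 1001)
p, q = stats.norm(-2, 1).pdf(grid), stats.norm(2, 0.5).pdf(grid)

def quantile_interp(p, q, grid, t, n=10_000):
    """Interpolate the PDFs p and q (on 'grid') at mixing parameter t."""
    probs = np.linspace(1e-4, 1 - 1e-4, n)
    qp = np.interp(probs, np.cumsum(p) / p.sum(), grid)  # quantiles of p
    qq = np.interp(probs, np.cumsum(q) / q.sum(), grid)  # quantiles of q
    samples = (1 - t) * qp + t * qq          # interpolate the quantiles
    hist, edges = np.histogram(samples, bins=200, density=True)
    return hist, edges

naive = 0.5 * (p + q)                        # bimodal artifact
wass, edges = quantile_interp(p, q, grid, 0.5)

The naive mixture produces a bimodal artifact, while the quantile interpolation moves and reshapes a single mode, which is usually the desired behavior for response functions.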
Scalable Bayesian $p$-Generalized Probit and Logistic Regression via Coresets
The logit and probit link functions are arguably the two most common choices for binary regression models. Many studies have extended the choice of link functions to avoid possible misspecification and improve the model fit to the data. We introduce the $p$-generalized normal distribution into binary regression in a Bayesian framework. The $p$-generalized normal distribution has received considerable attention due to its flexibility in modeling the tails, while generalizing, for instance, over the standard normal distribution, where $p=2$, or the Laplace distribution, where $p=1$. A scalable maximum likelihood estimation (MLE) method for $p$-generalized probit regression has been developed recently. Here we extend the estimation from MLE to Bayesian posterior estimates using Markov Chain Monte Carlo (MCMC) sampling for the model parameter $\beta$ and the link function parameter $p$. We use simulated and real-world data to verify the effect of different parameters $p$ on the estimation results and to show how logistic regression and probit regression can be incorporated into a broader framework. To make our Bayesian methods scalable in the case of large data, we also incorporate coresets as a means of reducing the data before performing the complex and time-consuming MCMC analysis. This allows us to perform very efficient calculations while retaining the original posterior parameter distributions up to small distortions, in practice and with theoretical guarantees.
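For orientation, one common parametrization of the standardized $p$-generalized normal density (conventions vary, so the authors' may differ) is $f_p(x) = \frac{p^{1-1/p}}{2\,\Gamma(1/p)} \exp\left(-\frac{|x|^p}{p}\right)$, which recovers the standard normal density for $p=2$ and the Laplace density for $p=1$.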
Privacy-Preserving Road Traffic Classification and Traffic Forecasting
The ongoing growth of road traffic inevitably raises environmental, economic, and health-related problems. Unfortunately, obvious measures like infrastructure extensions are usually not feasible due to regulatory or simply spatial limitations. Yet, recent advances in research have accelerated the deployment of Intelligent Transportation Systems (ITSs) applying data-driven machine learning (ML) methods that are able to tackle traffic-related challenges.
Therefore, this work aims to develop and harden a privacy-preserving detection and classification system utilizing high-precision radio fingerprinting for heterogeneous vehicular traffic flows in different environments (e.g., rural and urban scenarios). Finally, we further evaluate the use of actual vehicular traffic data for precise traffic forecasting, which allows future demands to be estimated and thus helps to increase the efficiency of infrastructure resources and smart city applications.
A Parallel Framework for Approximate Max-Dicut in Partitionable Graphs
Computing a maximum cut in undirected and weighted graphs is a well-studied problem and has many practical solutions that also scale well in shared memory (despite its NP-completeness). For its counterpart in directed graphs, however, we are not aware of practical solutions that also utilize parallelism. We engineer a framework that computes a high-quality approximate cut in directed and weighted graphs by using a graph partitioning approach. The general idea is to partition a graph into k subgraphs using a parallel partitioning algorithm of our choice (the first ingredient of our framework). Then, for each subgraph in parallel, we compute a cut using any polynomial-time approximation algorithm (the second ingredient). In a final step, we merge the locally computed solutions using a high-quality or exact parallel Max-Dicut algorithm (the third ingredient).
On graphs that can be partitioned well, the quality of the computed cut is significantly better than that of the best cut achieved by any linear-time algorithm.
This is particularly relevant for large graphs, where linear-time algorithms used to be the only feasible option.
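The three exchangeable ingredients can be summarized in a short sketch (placeholder callables, not the engineered implementation):

from concurrent.futures import ProcessPoolExecutor

def approx_max_dicut(graph, k, partition, local_maxdicut, merge_cuts):
    # Ingredient 1: partition the digraph into k subgraphs in parallel.
    subgraphs = partition(graph, k)
    # Ingredient 2: approximate Max-Dicut independently on each subgraph.
    with ProcessPoolExecutor() as pool:
        local_cuts = list(pool.map(local_maxdicut, subgraphs))
    # Ingredient 3: merge the local solutions with a high-quality or
    # exact parallel Max-Dicut algorithm.
    return merge_cuts(graph, local_cuts)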
Muon Deflection Simulation Using PROPOSAL
Incoming muons are reconstructed by large-scale neutrino telescopes or by muon tomography. Since muons are able to travel many kilometers through media, they undergo up to thousands of interactions, each with a small deflection. To estimate the total deflection accumulated over all interactions before the detector entry, the tool PROPOSAL is used to simulate muon deflections for different energies. Thereby, a systematic uncertainty on the angular muon reconstruction can be estimated. Two comparisons with MUSIC and Geant4 as well as two data-MC comparisons are performed, and all show good agreement. The muon deflection increases as the energy decreases and becomes relevant for current experiments at energies below the TeV scale.
Recent updates and applications of the lepton and photon propagator PROPOSAL
To be able to train and evaluate machine learning algorithms, Monte Carlo simulations are necessary. However, since large quantities of simulated data are often needed, the underlying tools need to be both precise and efficient.
PROPOSAL is such a framework, providing simulations of charged leptons and photons.
In this contribution, the basic principles as well as the most recent updates of PROPOSAL are presented.
Furthermore, an overview of the current applications of PROPOSAL is given.
This includes its usage within the shower simulation framework CORSIKA 8, neutrino observatories such as IceCube and RNO-G as well as in the context of muography.
Computing sky maps using the open-source package Gammapy and MAGIC data in a standardized format
The open-source Python package Gammapy, developed for the high-level analysis of gamma-ray data, requires gamma-like event lists combined with corresponding instrument response functions. For morphological analysis, this data has to include a background acceptance model. Here we report an approach to generate such a model for the MAGIC telescope data, accounting for the azimuth and zenith dependencies of the MAGIC background acceptance. We validate this method using observations of the Crab Nebula with different offsets from the pointing position.
Note: These results were already shown at the 7th Heidelberg International Symposium on High-Energy Gamma-Ray Astronomy.
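A minimal Gammapy-style sketch of the starting point of such an analysis, filling a counts sky map from DL3 event lists (the data path and observation ID are hypothetical):

from gammapy.data import DataStore
from gammapy.maps import Map

data_store = DataStore.from_dir("./magic-dl3-data")    # hypothetical path
observations = data_store.get_observations([5029747])  # hypothetical ID
# Counts map around the Crab Nebula; the background acceptance model is
# then needed to turn such maps into significance or flux maps.
counts = Map.create(skydir=(83.63, 22.01), frame="icrs",
                    width=5.0, binsz=0.02)
for obs in observations:
    counts.fill_events(obs.events)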
Proton Event Reconstruction for IACTs with Machine Learning Methods
Air showers induced by cosmic protons and heavier nuclei form the dominant background for very high energy gamma-ray observations with Imaging Air Cherenkov Telescopes. Even for strong very high energy gamma-ray sources, the signal-to-background ratio in the raw data is typically less than 1:5000, so very large statistics of events induced by cosmic protons and heavier nuclei are available as a byproduct of gamma-ray source observations. In this contribution, we present the reconstruction of the particle type of primary events and the energy reconstruction of the events classified as protons. For this purpose, we used a random forest method trained and tested with Monte Carlo simulations from the MAGIC telescopes, for energies above 70 GeV. We use the aict-tools framework, which includes machine learning methods for particle type classification and energy reconstruction. The open-source Python project aict-tools was developed at TU Dortmund and its reconstruction tools are based on scikit-learn predictors. Finally, an unfolding that takes the background into account is performed to compensate for the typical bias of the random forest results. Here we report on the performance of the proton event reconstruction using the well-tested and robust random forest approach.
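Since aict-tools builds on scikit-learn predictors, the core of such a reconstruction can be sketched generically (placeholder features and toy data, not the MAGIC analysis):

import numpy as np
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))          # image parameters (placeholders)
is_proton = rng.integers(0, 2, 1000)    # MC truth: proton vs. gamma
energy = rng.uniform(70, 10_000, 1000)  # MC truth energy in GeV

clf = RandomForestClassifier(n_estimators=100).fit(X, is_proton)
reg = RandomForestRegressor(n_estimators=100).fit(
    X[is_proton == 1], np.log10(energy[is_proton == 1]))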
Reconfigurable Intelligent Surface Deployment Study for Future Vehicular Communications
The Reconfigurable Intelligent Surface (RIS) technology is a promising candidate to facilitate the utilization of novel radio resources in the mmWave and THz domain. The proposed deployment strategies may especially benefit vehicular communications in urban environments. Based on our system architecture model, simulation studies reveal that RISs are capable of eliminating dark zones by severely reducing the path loss.
Benchmarking Deep Learning Workloads
Deep Learning has seen massive progress in accuracy, but this is paired with ever-increasing model sizes, hardware requirements and energy consumption. I will present our efforts and setup for benchmarking Deep Learning for academia and will discuss the (sustainable) future of Deep Learning in academia.
Shrub Ensembles for Online Classification
Online learning algorithms have become a ubiquitous tool in the machine learning toolbox and are frequently used in small, resource-constrained environments. Among the most successful online learning methods are Decision Tree (DT) ensembles. DT ensembles provide excellent performance while adapting to changes in the data, but they are not resource-efficient.
Incremental tree learners keep adding new nodes to the tree but never remove old ones, increasing the memory consumption over time. Gradient-based tree learning, on the other hand, requires the computation of gradients over the entire tree, which is costly even for moderately sized trees.
In this paper, we propose a novel memory-efficient online classification ensemble called Shrub Ensembles (SE) for resource-constrained systems. Our algorithm trains small to medium-sized decision trees on small windows and uses stochastic proximal gradient descent to learn the ensemble weights of these 'shrubs'. We provide a theoretical analysis of our algorithm and include an extensive discussion on the behavior of our approach in the online setting. In a series of 2,959 experiments on 12 different datasets, we compare our method against 8 state-of-the-art methods. Our Shrub Ensembles retain an excellent performance even when only little memory is available. We show that SE offers a better accuracy-memory trade-off in 7 of 12 cases, while having a statistically significantly better performance than most other methods. Our implementation is available under \url{https://github.com/sbuschjaeger/se-online}.
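The weight-learning step can be sketched as follows (a simplified squared-loss version for illustration; see the paper for the actual objective):

import numpy as np

def prox_l1(w, lam):
    # Soft-thresholding: proximal operator of lam * ||w||_1; drives the
    # weights of unneeded shrubs to exactly zero.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def update_weights(w, preds, y, lr=0.1, lam=0.01):
    # One stochastic proximal gradient step on a mini-batch.
    # preds: (n_shrubs, batch) predictions of the individual shrubs.
    residual = w @ preds - y                 # ensemble error
    grad = preds @ residual / y.size         # squared-loss gradient
    return prox_l1(w - lr * grad, lr * lam)  # gradient step, then prox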
Classification of toxicological compounds with high-dimensional gene expression data
We developed a classifier of compounds into toxic and non-toxic. For prediction, alert concentrations obtained from cell experiments are typically used. We show that adding alert concentrations estimated from high-dimensional gene expression measurements clearly improves the classification performance. We compared various approaches for reducing the high-dimensional information to low-dimensional summaries.
In addition, we show preliminary insights from a review that aims to investigate methods for assessing the similarity between data sets. The long-term goal of this project is to design appropriate simulation studies.
Detecting Extrachromosomal Circular DNA in Lung Cancer by Nanopore Sequencing
Lung cancer (LC) is the leading cause of cancer-related death, and five-year survival rates are below 20% due to advanced stage at diagnosis and therapy resistance. Therefore, early detection of tumor progression and resistance to therapy is an unmet medical need. Studies have shown that extrachromosomal circular DNA in tumor cells is a marker of aggressive LC. Thus, detection of such circular fragments is potentially useful for disease monitoring. Since nanopore sequencing produces long reads, it is an emerging technology potentially allowing the detection of circularized DNA with high sensitivity and specificity. In the current project, we established a graph-based workflow for detecting circularized DNA in cancer cell lines.
Establishing patient-specific profiles of circular DNA could facilitate development of individual biomarkers for monitoring treatment efficacy and early detection of cancer cells.
Online Adaptive Multivariate Time Series Forecasting
Multivariate Time Series (MTS) involve multiple time series variables that are interdependent. MTS data has two dimensions: a spatial dimension, along the different variables composing the MTS, and a temporal one.
Both the complex and the time-evolving nature of MTS data make forecasting one of the most challenging tasks in time series analysis. Typical methods for MTS forecasting are designed to operate in a static manner in time or space, without taking into account the evolution of spatio-temporal dependencies among data observations, which may be subject to significant changes. Moreover, it is generally accepted that none of these methods is universally valid for every application. Therefore, we propose an online adaptation of MTS forecasting by devising a fully automated framework for both adaptive input spatio-temporal variables and adequate forecasting model selection. The adaptation is performed in an informed manner following concept-drift detection in both spatio-temporal dependencies and model performance over time. In addition, a well-designed meta-learning scheme is used to automate the selection of appropriate dependence measures and the forecasting model. An extensive empirical study on several real-world datasets shows that our method achieves excellent or on-par results in comparison to the State-of-the-Art (SoA) approaches as well as several baselines.
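The overall control loop can be summarized in a skeleton (placeholder callables, not the actual framework):

def adaptive_forecast(stream, detect_drift, select_inputs, select_model):
    model, inputs = None, None
    for x_t, y_t in stream:
        if model is None or detect_drift(model, x_t, y_t):
            inputs = select_inputs(x_t)   # re-select spatio-temporal variables
            model = select_model(inputs)  # meta-learned model selection
        yield model.predict(x_t)
        model.update(x_t, y_t)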
Capacity Analysis and Optimization of IoT Networks in Unlicensed Spectrum
This contribution evaluates resource constraints with respect to the capacity and scalability bounds imposed by the communication protocol and regulatory requirements.
Further, it is shown how these limitations can be counteracted by data-driven optimizations.
Clustering by Direct Optimization of the Medoid Silhouette
The evaluation of clustering results is difficult and highly dependent on the evaluated data set and the perspective of the beholder. There are many different clustering quality measures that try to provide a general means of validating clustering results. A very popular measure is the Silhouette. We discuss the efficient medoid-based variant of the Silhouette, perform a theoretical analysis of its properties, and provide two fast versions for its direct optimization. We combine ideas from the original Silhouette with the well-known PAM algorithm and its latest improvement, FasterPAM. One of the versions guarantees results equal to the original variant and provides a speedup of O(k^2). In experiments on real data with 30,000 samples and k=100, we observed a 10,464x speedup compared to the original PAMMEDSIL algorithm.
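For orientation, the medoid-based Silhouette of a point $i$ can be written in terms of its distances $d_1(i)$ and $d_2(i)$ to the nearest and second-nearest medoid (a common simplification; the poster's exact definition may differ): $\tilde{s}(i) = \frac{d_2(i)-d_1(i)}{d_2(i)} = 1 - \frac{d_1(i)}{d_2(i)}$. Direct optimization then maximizes the average of $\tilde{s}(i)$ over all points by swapping medoids, instead of evaluating the measure only after clustering.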
Evaluation of Network Constraints for Real-Time Remote Operation
Real-time remote operation is found in diverse sectors: within modern industry, where remotely operated delivery chains are integrated into production processes; in telemedicine, where it can help solve the problem of lacking medical infrastructure in rural regions; or in the context of rescue robotics, in which unmanned aerial vehicles (UAVs) and unmanned ground vehicles (UGVs) can be used to explore disaster areas without endangering human personnel.
These applications place different requirements on the networks, which are often realized over 5G, in terms of ultra-low latency, high data rates, or reliability. But besides the question of which resources need to be provided, another issue comes to mind: What are the effects if the resources are constrained and the requirements are not met?
We define the combination of the objective quality of service (QoS) and the application-specific needs as the quality of experience (QoE), for which we propose an evaluation scheme to perform repeated trials throughout a use case.
First results indicate the impact of latency and packet loss on teleoperated driving (ToD) in our case study. Gathering knowledge about different QoE levels allows low-cost field testing of machine learning (ML)-based prediction techniques and other intelligent algorithms in order to either enhance the provided QoS or to assist the operators to maintain a certain QoE.
We further give an outlook on testing on cyber-physical playgrounds, where real small-scale vehicles are coupled with simulated networks and mobility in a digital twin.
Coresets for logistic regression
Coresets are one of the central methods to facilitate the analysis of large data sets. They are used to compress big data without losing too much information.
We present two methods, leverage score sampling and sketching, to construct a coreset for logistic regression time-efficiently and in few passes over the data. We give theoretical bounds showing that our methods yield coresets that need much less space than the original data if the data fulfills a certain niceness condition.
Further, we give a short introduction to the VC dimension, which is used to prove the bounds for leverage score sampling.
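A minimal sketch of the sampling idea (exact leverage scores via a QR decomposition here; the scalable variants approximate them, and concrete constructions for logistic regression may differ in detail):

import numpy as np

def leverage_score_coreset(X, y, size, rng=np.random.default_rng(0)):
    Q, _ = np.linalg.qr(X)           # thin QR of the n x d data matrix
    scores = (Q ** 2).sum(axis=1)    # row leverage scores
    probs = scores / scores.sum()
    idx = rng.choice(len(X), size=size, p=probs)
    weights = 1.0 / (size * probs[idx])  # importance weights
    return X[idx], y[idx], weights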
On Projections to Linear Subspaces
The merit of projecting data onto linear subspaces is well known from, e.g., dimension reduction. One key aspect of subspace projections, the maximum preservation of variance (principal component analysis), has been thoroughly researched, and understanding the effect of random linear projections on measures such as intrinsic dimensionality is still an ongoing effort. We investigate the less explored depths of linear projections onto explicit subspaces of varying dimensionality and the expectations of variance that ensue. The result is a new family of bounds for Euclidean distances and inner products. We showcase the quality of these bounds as well as investigate their intimate relation to intrinsic dimensionality estimation.
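The starting point of such bounds can be checked numerically: an orthogonal projection onto a subspace never increases Euclidean distances (a toy example, not the new bounds themselves):

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.normal(size=50), rng.normal(size=50)
B, _ = np.linalg.qr(rng.normal(size=(50, 5)))  # orthonormal basis of a
P = B @ B.T                                    # random 5-dim subspace
assert np.linalg.norm(P @ x - P @ y) <= np.linalg.norm(x - y) + 1e-12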
Measurement of CP violation in $B^0\to\psi K_S^0$ decays
One of the main questions of humankind is where we come from. The best theory for the origin of the universe is the Big Bang theory, according to which equal amounts of matter and antimatter were created; this stands in contrast to the matter-dominated universe we observe today.
This matter-antimatter asymmetry cannot be explained with known mechanisms, but it is known that CP violation is a contributing factor. CP violation is a mechanism integrated into the Standard Model of particle physics, yet the total amount of CP violation it provides is not sufficient to explain the observed matter-antimatter asymmetry. Therefore, precision tests of the parameters describing CP violation are needed. One of the best-known parameters is the CKM angle $\beta$, where the golden mode of measurement is $B^0 \to J\!/\psi K^0_S$ due to the dominant contributions of tree-level amplitudes. With new reconstruction types of the $K_S^0$ and the combination of different decay channels, it is possible to increase the statistical sensitivity in the most precise measurement of this quantity to date.
In this poster, the current status of the time-dependent $\sin(2\beta)$ measurement in the decays $B^0 \to J\!/\psi(\to \ell \ell)K^0_S(\to \pi^\pm\pi^\mp)$ with $\ell=e,\mu$ and $B^0 \to \psi(2S)(\to \mu^\pm \mu^\mp)K^0_S(\to \pi^\pm\pi^\mp)$ will be presented, using the full LHCb Run II dataset from 2015 to 2018, corresponding to 6$\mbox{\,fb}^{-1}$.
GPU Efficiency through Intelligent Collocation
Deep Learning has surged in accuracy, accompanied by increasing model sizes, hardware requirements, and energy consumption. GPUs are the primary accelerators for these emerging workloads, but they often suffer from underutilization.
I will present our efforts to understand this issue and its contributing reasons, as well as potentially available collocation solutions, insights from our experiments, and future steps.
Indirect dark matter search in galaxy clusters
One of the open questions of astronomy is the one about the existence and properties of dark matter.
In the context of gamma-ray astronomy, observations are performed to find a signal from the annihilation or decay of the hypothetical dark matter particle(s) via intermediate standard model particles.
In order for this signal to be visible over the background of gamma rays from other contributions, searches are being performed in sky regions with low luminosity and high suspected dark matter densities.
The highest concentration of dark matter halos is expected to be in galaxy clusters, which makes them very promising targets for observations.
This contribution focuses on the calculation of such halos and the expected signal, and discusses their detectability with current and next-generation Imaging Air Cherenkov Telescopes.
Machine Learning for Knowledge Acquisition in Astro-Particle Physics
This poster explores three fundamental machine learning tasks that emerge from the scientific process of knowledge acquisition in astro-particle physics. The first task is ordinal quantification, a.k.a. unfolding: the task of estimating the prevalences of ordered classes in sets of unlabeled data. This task emerges from the need to test the agreement between astrophysical theories and the class prevalences that a telescope observes. To this end, our work includes a unification of existing methods on quantification, a proposal for an alternative optimization process, and the development of regularization techniques that address ordinality in quantification problems. Second, we address learning under class-conditional label noise. In particular, we focus on a novel setting in which one of the class-wise noise rates is known and one is not. This setting emerges from a data acquisition protocol through which astro-particle telescopes simultaneously observe a region of interest and several background regions. Third, we address active class selection: the task of actively finding those proportions of classes which optimize the classification performance. In astro-particle physics, this task emerges from the simulation, which can produce the training data in any desired class proportions. We address this setting with a certificate of model robustness, which declares a set of class proportions for which the simulation-trained model is accurate, and with an active strategy for class-conditional data acquisition. This strategy uniquely considers existing uncertainties about those class proportions that need to be handled during the deployment of the classifier.
The mmWave Channel as Enabler for Novel Cellular Sensing Services
The move of 5G to incorporate millimeter-wave (mmWave) frequencies is motivated by novel applications requiring higher data rates and by the exponentially growing mobile traffic, which necessitates a scalable solution. However, beamforming antenna arrays are required to compensate for the challenging mmWave propagation characteristics. The resulting directional communication leads to technical trade-offs with regard to beam management performance and efficiency, and to high costs for hardware and dense network deployment. Network operators are therefore looking for incentives to make the signaling overheads and the infrastructure investments worthwhile. 5G positioning is expected to enable novel commercial and industrial high-accuracy services and will profit from the high mmWave bandwidths and the inherent angle information, among other factors. In this poster, we characterize the potential and challenges of 5G mmWave positioning. Further, in the scope of the emerging 6G topic of providing broader sensing services, we designed and evaluated a new mmWave-based method for measuring fine-grained 3D motions.
Automatized Analysis of MAGIC Sum-Trigger-II Pulsar Data
The MAGIC telescopes are a stereoscopic system of Imaging Air Cherenkov Telescopes used for gamma-ray detection in the GeV to TeV range. With the Sum-Trigger-II, low-energy data with a threshold as low as ~25 GeV can be recorded, which enables the MAGIC telescopes to perform analyses at comparably low energies, such as pulsar analyses.
However, this data requires a dedicated analysis with a complex structure, which is time-consuming for a human analyzer. Further, the current standard pulsar tool for MAGIC is based on tempo2, which is poorly maintained, not optimized for managing gamma-ray data, and rather difficult to use.
As a consequence, automating the analysis chain of Sum-Trigger-II data as well as the pulsar analysis is reasonable. This will make it possible to perform long-term pulsar analyses with little manual effort.
Trustworthy quality inspection
"The core elements of a future legal framework for AI in Europe will create a unique 'ecosystem for trust'." This is the EU Commission's self-claim. For the technical concretization of the requirements, the EU Commission currently refers to harmonized norms and technical standards, which, however, are not yet available in large parts. The goal of my research is to increase the trustworthiness of AI-based quality inspection by developing a standards-compatible methodology to increase the acceptance of this innovative technology. This has the potential to enable the diffusion of AI-based quality assurance and increase the resource efficiency of physical processes along complex process chains. In this presentation, potentials for the trustworthiness dimension of transparency will be explored and demonstrated.
Some Representation Learning Tasks and the Inspection of Their Models
We look into learning tasks for situations where labels are not available, as well as methods for trustworthy machine learning:
- Retrieval of related formulas with graph networks
- Gamma-astronomy with noisy labels
- Explanations based on training logs
- Robustness based on distillation
Data Aggregation for Hierarchical Clustering
Hierarchical Agglomerative Clustering (HAC) is one of the earliest and most flexible clustering methods. It can use various linkage strategies and distances.
Most algorithms for HAC operate on a distance matrix and therefore require quadratic memory.
The standard algorithm also has a cubic run-time to produce a hierarchy.
Both memory and run-time are especially problematic in the context of embedded or otherwise resource-constrained systems.
In this poster, we present how we utilize data aggregation with BETULA to make HAC viable on systems with constrained resources, with only minor losses in clustering quality.
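The aggregation follows the cluster feature idea of BIRCH, which BETULA refines for numerical stability; a simplified sketch:

import numpy as np

def make_cf(points):
    # Summarize many points by count, linear sum, and sum of squares.
    points = np.asarray(points, dtype=float)
    return len(points), points.sum(axis=0), (points ** 2).sum(axis=0)

def merge_cf(cf_a, cf_b):
    # Two summaries merge in O(d), so HAC can run on a small number of
    # cluster features instead of all points.
    (na, lsa, ssa), (nb, lsb, ssb) = cf_a, cf_b
    return na + nb, lsa + lsb, ssa + ssb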
Deep learning-based imaging in radio interferometry
The cleaning of data measured with radio interferometers is an essential task for the scientific use of radio interferometric images. Established methods are often time-consuming and require expert knowledge. To generate reproducible images on small time scales, we have developed a prototype deep learning-based reconstruction method. As radio interferometers sample an incomplete image of the sky in Fourier space, our method takes this information as input and restores the missing information using convolutional layers.
The architecture applied is inspired by super-resolution models that take advantage of residual learning. Simulated radio galaxies consisting of Gaussian components are used to train the deep learning model. The poster gives an introduction to the idea and the architecture as well as exemplary results which are evaluated using various measures.
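A toy illustration of the underlying inverse problem (not the trained model): the instrument samples the sky's Fourier transform incompletely, and the network has to restore the unmeasured components.

import numpy as np

sky = np.zeros((64, 64)); sky[30:34, 30:34] = 1.0        # toy source
vis = np.fft.fft2(sky)                                   # full visibilities
mask = np.random.default_rng(0).random(vis.shape) < 0.2  # sparse uv-coverage
dirty = np.fft.ifft2(vis * mask).real                    # incomplete image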
Transforming PageRank into an Infinite-Depth Graph Neural Network
Graph neural networks have found great success in various graph-related tasks like node classification or edge prediction. Graph convolutions update the state of each node based on all adjacent states from the previous iteration. Various problems resulting from this have been identified, most prevalently the issue known as over-smoothing: node states tend to become more similar, and usable information vanishes from individual node states, leading to worse performance in downstream tasks. Our recent work uses the close connection to the PageRank algorithm and its variant, personalized PageRank, which solves a closely related issue. Similarly, we present a reformulation of graph neural networks that does not suffer from over-smoothing, even in the limit of infinitely many layers. We prove the convergence of our infinitely deep version and present an approach that keeps memory complexity constant. We provide strong intuition for why reusing the initial state at later stages of a convolutional network is beneficial. Our empirical results demonstrate that our approach is not only theoretically well-founded but also backed up in practice.
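The kind of propagation this builds on can be sketched as a personalized-PageRank-style fixed-point iteration (a simplified illustration; A_hat denotes a normalized adjacency matrix):

import numpy as np

def ppr_propagate(A_hat, H0, alpha=0.1, tol=1e-9):
    # Mixes neighbor aggregation with the initial node states H0; the
    # alpha-weighted reuse of H0 is what prevents over-smoothing. For
    # alpha > 0 and normalized A_hat the map is a contraction, so the
    # iteration converges to a unique fixed point.
    H = H0.copy()
    while True:
        H_new = (1 - alpha) * A_hat @ H + alpha * H0
        if np.linalg.norm(H_new - H) < tol:
            return H_new
        H = H_new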
Performance Benefits of NB-IoT Early Data Transmission in Large Scaled IoT Networks
In the future, the growing number of IoT devices will lead to a massive number of users in communication networks, resulting in increased competition between users for the available network resources and in higher latencies and power consumption for each individual user. Therefore, in our contribution we analyze the scalability boundaries of NB-IoT networks with different transmission modes, called standard transmissions, Cellular IoT Optimization, and Early Data Transmission, using a novel detailed implementation of NB-IoT in the ns-3 LTE simulation framework LENA. The results show that Early Data Transmission clearly outperforms NB-IoT standard transmissions and Cellular IoT Optimization by providing up to 4.1 times lower latency and 1.6 times longer battery life, while only using one-fourth of the downlink spectrum. Further, good scalability for up to 864,000 devices per day in a cell with an area of 4.91 km², or 176,000 devices per day and km², is achieved for the NB-IoT standard transmission, Cellular IoT Optimization, and Early Data Transmission scenarios. It is shown that the scalability is limited by the downlink spectrum capacity for non-Early Data Transmission scenarios and by Random Access windows for all scenarios. In a second simulation run, the number of Random Access windows was doubled for all scenarios, which led to an improved Packet Delivery Rate, lower power consumption in high-scale scenarios, and fewer Random Access collisions. The analysis shows a great positive impact of Early Data Transmission on the overall performance, and we highly recommend using it by default.
Instance-aware multi-object self-supervision for monocular depth prediction
This work proposes a self-supervised monocular image-to-depth prediction framework that is trained with an end-to-end photometric loss that handles not only 6-DOF camera motion but also 6-DOF moving object instances. Self-supervision is performed by warping the images across a video sequence using depth and scene motion, including object instances. One novelty of the proposed method is the use of the multi-head attention of the transformer network, which matches moving objects across time and models their interaction and dynamics. This enables accurate and robust pose estimation for each object instance. Most image-to-depth prediction frameworks make the assumption of rigid scenes, which largely degrades their performance with respect to dynamic objects. Only a few SOTA papers have accounted for dynamic objects. The proposed method is shown to outperform these methods on standard benchmarks, and the impact of dynamic motion on these benchmarks is exposed. Furthermore, the proposed image-to-depth prediction framework is also shown to be competitive with SOTA video-to-depth prediction frameworks.
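Such photometric losses typically build on the standard rigid warping relation (shown here without the per-object motion terms that this work adds; $K$ denotes the camera intrinsics, $T_{t \to s}$ the relative pose, and $D_t$ the predicted depth): a pixel $p_t$ in the target frame is projected into the source frame via $p_s \sim K\, T_{t \to s}\, D_t(p_t)\, K^{-1} p_t$, and the loss penalizes the photometric difference $|I_t(p_t) - I_s(p_s)|$ between the target image and the warped source image.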