The Indy Autonomous Challenge Powered by Cisco (IAC) is the first autonomous racecar competition at the Indianapolis Motor Speedway and took place on October 23rd. On that day, 9 teams from 21 universities competed for the $1 million grand prize. The rules of the IAC required each team to compete in a fastest-lap competition that included an obstacle avoidance component.

TII Euroracing took part in the challenge and represented our University in this international and innovative event. They placed second at IMS and third at both CES22 and TMS.

This presentation will recount their experience at the competition and explain how to make a Dallara AV-21, based on the Indy Lights chassis, drive by itself at over 270 km/h and overtake another autonomous car at over 230 km/h.

Fast, accurate, and robust localization of barcodes in images is essential to Datalogic's core business. To this end, this seminar will present a proprietary, barcode-specific, low-resolution feature extraction technique enabling fast barcode localization. The seminar will cover both the analytical derivation and the numerical and hardware-specific optimization steps required to make the solution deployable on embedded devices with real-time constraints.
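The proprietary details are of course not public, but the general idea behind fast, low-resolution barcode localization can be sketched with a classical textbook approach (this is a generic gradient-anisotropy heuristic, not Datalogic's technique; the block size and threshold below are arbitrary choices):

```python
import numpy as np

def barcode_score(gray, block=8):
    """Score low-resolution blocks by gradient anisotropy: a 1-D barcode
    produces strong gradients across the bars and weak ones along them."""
    g = gray.astype(float)
    gx = np.abs(np.diff(g, axis=1))   # horizontal gradients (across bars)
    gy = np.abs(np.diff(g, axis=0))   # vertical gradients (along bars)
    hb, wb = g.shape[0] // block, g.shape[1] // block
    score = np.zeros((hb, wb))
    for i in range(hb):
        for j in range(wb):
            ys = slice(i * block, (i + 1) * block)
            xs = slice(j * block, (j + 1) * block)
            score[i, j] = gx[ys, xs].sum() - gy[ys, xs].sum()
    return score

def localize_barcode(gray, block=8, rel_thresh=0.5):
    """Return a boolean mask of candidate barcode blocks."""
    score = barcode_score(gray, block)
    return score > rel_thresh * score.max()
```

Working on a coarse block grid rather than full-resolution pixels is what keeps the cost compatible with embedded real-time constraints.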

ESAOTE-UNIGE

Medical ultrasound is the most widely used non-invasive real-time imaging system: it exploits the ability of human tissues to reflect ultrasound signals. In detail, reflected ultrasound can be processed to obtain images and quantitative measurements of the physical properties of tissues. Across all the imaging modalities that a common ultrasound apparatus can handle, image properties change in accordance with numerous structural and non-structural factors of the machine. In particular, the Point Spread Function (PSF) turns out to be space-variant and dependent on the acquisition geometry and the probe that is used.

In addition, for each mode and each probe there are many parameters that govern image quality (transmitted waveform, transmission frequency, number of active probe elements, and so on). These parameters have historically been chosen according to experience and must be selected anew each time the machine specifications change.

Is it then possible to develop one or more methods to automatically estimate all the parameters involved for any given mode? And, in particular, can we hope to optimize the parameters so that the PSF is as uniform as possible? We have tried to answer these questions by formulating a model and an associated optimization problem.
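As a toy illustration of the kind of optimization problem involved (the depth-dependent width model and every number below are our own assumptions for illustration, not Esaote's actual model), one can select the focal depth that makes the PSF width as uniform as possible across the imaging depths:

```python
import numpy as np

def psf_width(z, f, w0=1.0, a=0.05):
    """Assumed toy model of the lateral PSF width (mm) at depth z for
    focal depth f: narrowest at the focus, growing away from it."""
    return np.sqrt(w0 ** 2 + a * (z - f) ** 2)

def optimize_focus(depths, candidates):
    """Grid search for the focal depth minimizing the variance of the
    PSF width over the imaging depths (i.e., most uniform PSF)."""
    costs = [np.var(psf_width(depths, f)) for f in candidates]
    return float(candidates[int(np.argmin(costs))])

depths = np.linspace(10.0, 50.0, 41)               # imaging depths, mm
best_f = optimize_focus(depths, np.linspace(0.0, 60.0, 121))
```

The real problem is of course far richer (many coupled parameters per mode and probe), but the structure is the same: a forward model of the PSF plus a uniformity criterion to minimize.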

Head of the Visual Information Lab, CINECA Interuniversity Consortium, Bologna

CEA - Commissariat à l'énergie atomique et aux énergies alternatives

X-ray tomography (CT Scan) is a widely used method to inspect an object without damaging its structure (Non Destructive Testing). It allows the conformity of the object to be checked with respect to the intended dimensions, material composition, homogeneity, etc.

At the French Atomic Energy Commission (CEA), we are developing such a technique for different objectives and objects: verification of the conformity of nuclear waste drums (safety objectives), nuclear fuel (performance objectives) and metal additive manufacturing (cost objectives).

Nuclear waste drums are very large objects (more than one cubic metre and two tonnes), nuclear fuel is very dense and metallic additive manufacturing is an intermediate case.

For these three objects, the scanners are specific and rely on linear accelerators (high energy and dose rate) and thick scintillators. These components introduce an intrinsic blur, characterized by a Point Spread Function (PSF), which degrades the scanner results.

In order to correct this degradation and improve the control capabilities, different PSF deconvolution methods are currently under study and will be presented. They can be applied to radiographs (projections with known Poissonian noise but low gradient) before the tomographic reconstruction process, or directly to CT images (non-Poissonian noise and artefacts, but high gradient).

The two corrections can lead to different performances. Finally, while they effectively reduce the blur in the final CT images, they must also deal with the noise corruption that is always present.
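For the radiograph-domain case with Poissonian noise, the classical maximum-likelihood deconvolution scheme is Richardson-Lucy; a minimal 1-D sketch (purely illustrative, not CEA's implementation) looks like this:

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """Richardson-Lucy deconvolution: the classical maximum-likelihood
    iteration for Poisson noise, hence a natural fit for radiographs.
    1-D version; psf is assumed normalized (sums to 1)."""
    est = np.full_like(blurred, blurred.mean(), dtype=float)
    psf_flip = psf[::-1]                       # adjoint of the blur operator
    for _ in range(n_iter):
        conv = np.convolve(est, psf, mode="same")
        ratio = blurred / np.maximum(conv, 1e-12)   # avoid division by zero
        est = est * np.convolve(ratio, psf_flip, mode="same")
    return est
```

The multiplicative update preserves non-negativity, which matters for photon-count data; regularized variants are needed in practice to control noise amplification as the iterations progress.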

Università di Genova

We consider structured optimization problems defined in terms of the sum of a smooth and convex function and a proper, lower semicontinuous (l.s.c.), convex (typically nonsmooth) function in reflexive variable exponent Lebesgue spaces $L^{p(\cdot)}$. Due to their intrinsic space-variant properties, such spaces can be naturally used as solution spaces and combined with space-variant functionals for the solution of ill-posed inverse problems. For this purpose, we propose a new proximal gradient algorithm in $L^{p(\cdot)}$, where the proximal step, rather than depending on the natural (non-separable) $L^{p(\cdot)}$-norm, is defined in terms of its modular function, which, thanks to its separability, allows for the efficient computation of algorithmic iterates. To highlight the effectiveness of the modeling, we report some numerical tests for the CT imaging application.
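The computational advantage of the modular can be made concrete in a finite-dimensional toy sketch (our own illustration with exponents $p_i \in [1, 2]$ and a scalar bisection solve; not the paper's algorithm verbatim): separability reduces the proximal step to independent one-dimensional problems.

```python
import numpy as np

def prox_modular(v, lam, p):
    """Prox of the modular rho(u) = sum_i |u_i|^{p_i} / p_i.
    Separability reduces it to independent scalar problems
        min_x 0.5*(x - v_i)^2 + lam*|x|^{p_i}/p_i,
    solved by soft-thresholding (p_i = 1) or bisection (p_i > 1)."""
    out = np.zeros_like(v, dtype=float)
    for i, (vi, pi) in enumerate(zip(v, p)):
        if pi == 1.0:
            out[i] = np.sign(vi) * max(abs(vi) - lam, 0.0)
            continue
        a, b = 0.0, abs(vi)
        for _ in range(60):       # bisection on x + lam*x^(p-1) = |v|
            m = 0.5 * (a + b)
            if m + lam * m ** (pi - 1.0) < abs(vi):
                a = m
            else:
                b = m
        out[i] = np.sign(vi) * 0.5 * (a + b)
    return out

def prox_grad(grad_f, prox, x0, step, n_iter=200):
    """Plain proximal gradient iteration x <- prox(x - step*grad_f(x), step)."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        x = prox(x - step * grad_f(x), step)
    return x
```

Each component gets its own exponent, mirroring the space-variant character of $L^{p(\cdot)}$, while the cost per iterate stays linear in the number of components.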

University of Helsinki

Dual-energy X-ray tomography is considered in a context where the target under imaging consists of two distinct materials. The materials are assumed to be possibly intertwined in space, but at any given location there is only one material present. Further, two X-ray energies are chosen so that there is a clear difference in the spectral dependence of the attenuation coefficients of the two materials.

A novel regularizer is presented for the inverse problem of reconstructing separate tomographic images for the two materials. A combination of two things, (a) non-negativity constraint, and (b) penalty term containing the inner product between the two material images, promotes the presence of at most one material in a given pixel. A preconditioned interior point method is derived for the minimization of the regularization functional.
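The effect of combining (a) and (b) can be illustrated with a toy pixelwise version (our own sketch: identity forward operators and plain projected gradient in place of the paper's preconditioned interior point method; the penalty weight and step size are arbitrary):

```python
import numpy as np

def separate(m1, m2, beta=5.0, step=0.05, n_iter=500):
    """Toy dual-energy separation: a least-squares data fit per energy
    plus the inner-product penalty beta*<u1, u2>, with non-negativity
    enforced by projection. At pixels where both channels see signal,
    the penalty pushes one of the two material values to zero."""
    u1 = np.clip(m1, 0.0, None).astype(float).copy()
    u2 = np.clip(m2, 0.0, None).astype(float).copy()
    for _ in range(n_iter):
        g1 = (u1 - m1) + beta * u2          # gradient w.r.t. u1
        g2 = (u2 - m2) + beta * u1          # gradient w.r.t. u2
        u1 = np.clip(u1 - step * g1, 0.0, None)
        u2 = np.clip(u2 - step * g2, 0.0, None)
    return u1, u2
```

Note how the non-negativity constraint and the bilinear penalty cooperate: neither alone forces "at most one material per pixel", but together they make mixed pixels expensive while leaving single-material pixels essentially unpenalized.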

Numerical tests with digital phantoms suggest that the new algorithm outperforms the baseline method, Joint Total Variation regularization, in terms of the number of correctly material-characterized pixels. While the method is tested only in a two-dimensional setting with two materials and two energies, the approach readily generalizes to three dimensions and more materials; the number of materials just needs to match the number of energies used in imaging.