Cottonwood 2/3
Tuesday, September 17
 

09:00 MDT

Back to Basics: Function Call Resolution
Tuesday September 17, 2024 09:00 - 10:00 MDT
When a C++ compiler encounters an expression like f(x, y), it must consider several language mechanisms to decide which function f the program will call. These mechanisms include name lookup, overload resolution, default function arguments, and template processing. Having a firm understanding of these mechanisms and how they interact will help you write user-friendly interfaces for you and your team.

This session begins by reviewing each of these mechanisms individually. It then examines how those mechanisms interact, focusing on situations that are most likely to occur in practice. Some of the questions that we’ll consider are:


     
  • How does the compiler resolve calls to overloaded functions when implicit conversions apply to multiple arguments?
  • Why does the compiler apply implicit conversions when resolving calls to overloaded functions, but not when making calls to function templates? (See the sketch below.)

After this session, you’ll have a clearer understanding of how the compiler makes sense out of your code. With this knowledge, you’ll find it easier to craft interfaces that are easy to use correctly and hard to use incorrectly. You’ll also be better able to steer the compiler in your intended direction when necessary.
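As a flavor of the second question above, here is a minimal sketch (not taken from the session's materials) contrasting overload resolution, which applies implicit conversions, with template argument deduction, which does not; the names `f` and `g` are purely illustrative:

```cpp
#include <iostream>
#include <string>

void f(double)             { std::cout << "f(double)\n"; }   // #1
void f(const std::string&) { std::cout << "f(string)\n"; }   // #2

template <typename T>
void g(T, T)               { std::cout << "g<T>(T, T)\n"; }

int main() {
    f(1);               // OK: implicit int -> double conversion; overload #1 wins
    // g(1, 2.0);       // error: T is deduced as int from the first argument and
                        //        as double from the second; deduction applies no
                        //        implicit conversions, so the call is ill-formed
    g<double>(1, 2.0);  // OK: explicit T bypasses deduction, then 1 converts to double
}
```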
Speakers
avatar for Ben Saks

Ben Saks

Chief Engineer, Ben Saks Consulting
Ben Saks is the chief engineer of Saks & Associates, which offers training and consulting in C and C++ and their use in developing embedded systems. Ben has represented Saks & Associates on the ISO C++ Standards committee as well as two of the committee’s study groups: SG14 (low-latency...
Tuesday September 17, 2024 09:00 - 10:00 MDT
Cottonwood 2/3

12:30 MDT

Case For Non-Moveable Types
Tuesday September 17, 2024 12:30 - 13:30 MDT
In this session, we will look at what it means to be moved-from, what impact being moved-from can have on code correctness, and whether some types should simply be non-moveable for the sake of safe, correct code!
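As a rough illustration of the topic (a sketch, not material from the talk): a moved-from object is left in a valid but easy-to-misuse state, and deleting the move operations rules that state out entirely. The `Session` type below is hypothetical.

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <utility>

int main() {
    // Moved-from state: the source remains valid but is easy to misuse afterwards.
    auto p = std::make_unique<int>(42);
    auto q = std::move(p);
    assert(p == nullptr && *q == 42);   // p is now null; dereferencing it later is a bug

    // A deliberately non-moveable type: no object of it can ever be moved-from.
    struct Session {
        Session() = default;
        Session(Session&&) = delete;
        Session& operator=(Session&&) = delete;
        std::mutex m;                   // std::mutex is itself neither copyable nor moveable
    };

    Session s;
    // Session s2 = std::move(s);       // does not compile: move is deleted
}
```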
Speakers
avatar for Jason Turner

Jason Turner

Sole Proprietor, Jason Turner
Jason is host of the YouTube channel C++Weekly, co-host emeritus of the podcast CppCast, author of C++ Best Practices, and author of the first casual puzzle books designed to teach C++ fundamentals while having fun!
Tuesday September 17, 2024 12:30 - 13:30 MDT
Cottonwood 2/3

14:00 MDT

Leveraging C++20/23 Features for Low Level Interactions
Tuesday September 17, 2024 14:00 - 15:00 MDT
Low level interactions are a core part of embedded implementations. All too often, C++ developers rely on C constructs and interactions due to prior biases around language support. Herein we present effective leveraging of C++20 and C++23 constructs in an embedded driver code base. From using an existing C driver more effectively with modern C++ smart pointers to leveraging constexpr bit and byte manipulation from the standard library, we will go over how you can stay on the cutting edge of C++ language evolution in the embedded space.
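A minimal sketch of the two ideas mentioned above, using a made-up `uart_dev` driver API rather than the code base from the talk: a `std::unique_ptr` with a custom deleter owns a C-style handle, and a small `constexpr` helper plus the C++20 `<bit>` header handle register fields at compile time.

```cpp
#include <bit>        // C++20 bit-manipulation utilities
#include <cstdint>
#include <memory>

// Stand-in C-style driver API (hypothetical; a real project would get this from a C header).
struct uart_dev { int id; };
uart_dev* uart_open(int id)       { return new uart_dev{id}; }
void      uart_close(uart_dev* d) { delete d; }

// RAII ownership of the C handle through unique_ptr with a custom deleter.
struct uart_closer {
    void operator()(uart_dev* d) const noexcept { uart_close(d); }
};
using uart_handle = std::unique_ptr<uart_dev, uart_closer>;

// constexpr bit/byte manipulation, checked at compile time.
constexpr std::uint32_t set_field(std::uint32_t reg, std::uint32_t value,
                                  unsigned shift, std::uint32_t mask) {
    return (reg & ~(mask << shift)) | ((value & mask) << shift);
}
static_assert(set_field(0x0000'00FFu, 0b101u, 8, 0b111u) == 0x0000'05FFu);
static_assert(std::popcount(0xF0u) == 4);   // <bit> functions are constexpr

int main() {
    uart_handle uart{uart_open(0)};   // closed automatically when uart goes out of scope
}
```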
Speakers
avatar for Jeffrey Erickson

Jeffrey Erickson

HW/SW Co-Design Architect, Altera, an Intel Company
Jeffrey E Erickson works in HW/SW Codesign Architecture at Altera, an Intel company. He holds a BS in Electrical and Computer Engineering from the University of Virginia and a doctorate from Rutgers University and UMDNJ. For 15 years he has worked in embedded systems development including...
Tuesday September 17, 2024 14:00 - 15:00 MDT
Cottonwood 2/3

15:15 MDT

High-performance, Parallel Computer Algebra in C++
Tuesday September 17, 2024 15:15 - 15:45 MDT
The jump from a theoretical algorithm to a high-performance real-world implementation is non-trivial. Factors that impact performance but are not reflected in an academic paper must be resolved in practice, including but not limited to cache misses, machine word size limits, parallelism overheads, and differing instruction speeds.

The Ontario Research Centre for Computer Algebra (ORCCA) performs fundamental research and development in computer algebra. Part of our work involves researching and implementing high-performance, parallel algorithms in C++. In this talk, we discuss a state-of-the-art algorithm for polynomial multiplication that beats leading computer algebra systems, such as Maple (our own library) and FLINT. We focus on our experience implementing the algorithm in CUDA C++, the challenges we faced, and our solutions.
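For orientation only, here is the naive schoolbook baseline that such work improves upon; it is a plain C++ sketch, not the parallel CUDA C++ algorithm presented in the talk:

```cpp
#include <cstdint>
#include <vector>

// Schoolbook O(n*m) multiplication of dense polynomials given as coefficient
// vectors (c[k] is the coefficient of x^k). High-performance implementations
// replace this with asymptotically faster, cache- and GPU-aware schemes.
std::vector<std::int64_t> multiply(const std::vector<std::int64_t>& a,
                                   const std::vector<std::int64_t>& b) {
    if (a.empty() || b.empty()) return {};
    std::vector<std::int64_t> c(a.size() + b.size() - 1, 0);
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < b.size(); ++j)
            c[i + j] += a[i] * b[j];
    return c;
}
```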
Speakers
DT

David Tran

Software Engineer, Snowflake
David Tran is a researcher with the Ontario Research Center for Computer Algebra, developing novel, high-performance parallel algorithms in computer algebra. Previously, he worked with C++ as a software engineer at Apple and at Snowflake. He is in the final year of his joint Bachelor's...
Tuesday September 17, 2024 15:15 - 15:45 MDT
Cottonwood 2/3

15:50 MDT

Application of C++ in Computational Cancer Modeling
Tuesday September 17, 2024 15:50 - 16:20 MDT
Cancer research involves simulating "pseudo tumors" by sampling stochastic processes. Beginning with C++11, new features in both the language and the Standard Library proved highly beneficial for simulating stochastic processes and analyzing the results. In addition, the linear algebra library Eigen, which consists of template code, makes it simple to perform matrix and vector operations in C++. Using an application in colorectal cancer research, the presentation will feature three topics in C++: random number generation, parallel computing, and leveraging the Eigen library.

One of the main attractions of Eigen is that it simplifies the code implementation for people who think mathematically, as the + and * operators are clearly defined in the mathematical sense of matrix addition and row-by-column matrix multiplication.  This talk highlights this aspect by demonstrating how to express the simulation of a stochastic cancer modeling process in C++, a problem that is well formulated in terms of matrix operations.
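For readers unfamiliar with Eigen, a minimal sketch of that style (the values are illustrative, not taken from the cancer model):

```cpp
#include <Eigen/Dense>
#include <iostream>

int main() {
    Eigen::Matrix2d A;          // a small transition-style matrix
    A << 0.9, 0.1,
         0.2, 0.8;
    Eigen::Vector2d x(1.0, 0.0);

    Eigen::Matrix2d B = A + A;  // matrix addition, written as in the mathematics
    Eigen::Vector2d y = A * x;  // row-by-column matrix-vector product

    std::cout << B << "\n\n" << y << "\n";
}
```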

Simulating these stochastic processes also requires drawing random numbers. Before the new <random> capabilities in C++11, users typically needed to patch together their own uniform random number generators, as well as transformations to their desired distributions such as the Poisson and exponential distributions, which required a large amount of time for implementation and testing. This talk will show that by using methods such as std::discrete_distribution and std::exponential_distribution in <random>, constructing a random process simulator is technically simple in C++.
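A minimal sketch of that construction (the weights and rate are illustrative, not taken from the model): `std::discrete_distribution` picks which event fires, and `std::exponential_distribution` draws the waiting time until it does.

```cpp
#include <iostream>
#include <random>

int main() {
    std::mt19937_64 gen(42);                                        // fixed seed for reproducibility

    std::discrete_distribution<int> which_event({0.5, 0.3, 0.2});   // weighted event choice
    std::exponential_distribution<double> waiting_time(1.0);        // time until the next event

    for (int i = 0; i < 3; ++i) {
        int event = which_event(gen);
        double dt = waiting_time(gen);
        std::cout << "event " << event << " after " << dt << " time units\n";
    }
}
```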

Cancers with seemingly similar initial conditions may develop into drastically different outcomes. Much attention has been paid to events with high variability. To test a theoretical model, a huge number of simulations must be run to capture this variability. Therefore, concurrency is in practice a requirement rather than an option. In this application, parallel versions of certain STL algorithms introduced in C++17 are used to obtain descriptive statistics on the simulated data. In addition, task-based concurrency is used to replace multithreaded computing formerly based on the pthread library.
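A minimal sketch of that use of the C++17 parallel algorithms (the data is a stand-in for simulated outcomes; with libstdc++, linking against TBB may be required):

```cpp
#include <execution>
#include <functional>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    std::vector<double> samples(1'000'000, 1.0);   // stand-in for simulated outcomes

    // Parallel sum and sum of squares to get simple descriptive statistics.
    double sum = std::reduce(std::execution::par, samples.begin(), samples.end(), 0.0);
    double sumsq = std::transform_reduce(std::execution::par,
                                         samples.begin(), samples.end(), 0.0,
                                         std::plus<>{},
                                         [](double x) { return x * x; });

    double mean = sum / samples.size();
    double variance = sumsq / samples.size() - mean * mean;
    std::cout << "mean " << mean << ", variance " << variance << "\n";
}
```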

In sum, this talk provides an example of using modern C++ in computational biology, with a goal of showing the C++ community that computational biology is a growing domain for applying modern C++ to general science.
Speakers
RZ

Ruibo Zhang

University of Washington
Ruibo Zhang is a second-year Ph.D. student in the Department of Applied Mathematics at the University of Washington. He has been working on applying probability theory to modeling cancer.
Tuesday September 17, 2024 15:50 - 16:20 MDT
Cottonwood 2/3

16:45 MDT

Vectorizing a CFD Code With `std::simd` Supplemented by (Almost) Transparent Loading and Storing
Tuesday September 17, 2024 16:45 - 17:45 MDT
Computational Fluid Dynamics (CFD) codes are ubiquitous in high performance computing. Their computational demands require the use of all levels of parallelism provided by the hardware. This includes the SIMD units of today's processors, which provide one level of data parallelism. With `std::simd`, it becomes possible to address these units directly from C++.

The talk reports on our work on the vectorization of a CFD code. The focus will be primarily on the way we have expressed vectorization using `std::experimental::simd` and less on the achieved performance gains. In this context, we have developed a library that complements `std::simd`. The goal of this library is to make loading and storing `std::simd` variables syntactically equivalent to loading and storing their scalar counterparts. Loop bodies written for scalar variables can then be used for `std::simd` variables without modification. The talk also discusses some possible improvements to C++, since this goal can currently only be achieved by using a macro.
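For context, a minimal sketch of working with `std::experimental::simd` directly, showing the explicit loads and stores that the library described in the talk aims to hide behind scalar-looking syntax (requires an implementation of the Parallelism TS v2, e.g. GCC's):

```cpp
#include <experimental/simd>
#include <array>
#include <cstddef>
#include <iostream>

namespace stdx = std::experimental;

int main() {
    using simd_t = stdx::native_simd<float>;

    std::array<float, simd_t::size()> a{}, b{}, out{};
    for (std::size_t i = 0; i < a.size(); ++i) { a[i] = float(i); b[i] = 2.0f; }

    simd_t va, vb;
    va.copy_from(a.data(), stdx::element_aligned);   // explicit load
    vb.copy_from(b.data(), stdx::element_aligned);

    simd_t vc = va * vb + vb;                        // the loop body reads like scalar code
    vc.copy_to(out.data(), stdx::element_aligned);   // explicit store

    for (float x : out) std::cout << x << ' ';
    std::cout << '\n';
}
```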
Speakers
avatar for Olaf Krzikalla

Olaf Krzikalla

Research assistant, DLR e.V.
Olaf Krzikalla started using C++ as a student at the Technical University of Dresden in the mid-90s. After that he worked as a software developer for several companies. During this time he also contributed the first version of `boost::intrusive`. In 2009 Olaf joined the HPC Center...
Tuesday September 17, 2024 16:45 - 17:45 MDT
Cottonwood 2/3
 