Track: Algorithms
Monday, September 16
 

14:00 MDT

So You Think You Can Hash
Monday September 16, 2024 14:00 - 15:00 MDT
Hashing is crucial for efficient data retrieval and storage. This presentation delves into computing hashes for aggregated user-defined types and experimenting with various hash algorithms. We will explore the essentials of hash functions and their properties, techniques for hashing complex user-defined types, and customizing std::hash for specialized needs. 
   Additionally, we (re)introduce a framework for experimenting with and benchmarking different hash algorithms. This will allow easy switching of hashing algorithms used by complex data structures, enabling easy comparisons. Hash algorithm designers can concentrate on designing better hash algorithms, with little worry about how these new algorithms can be incorporated into existing code. Type designers can create their hash support just once, without worrying about what hashing algorithm should be used. 
   You will gain practical insights and tools to implement, customize, and evaluate hash functions in C++, enhancing software performance and reliability.
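As a taste of the customization involved, here is a minimal sketch (not the speaker's framework) of specializing std::hash for an aggregate user-defined type; the hash_combine helper shown is an illustrative, boost-style mixer, not a standard facility:

```cpp
#include <cstddef>
#include <functional>
#include <string>

struct Employee {
    std::string name;
    int         id;
};

// Illustrative combiner (boost::hash_combine-style); not part of the standard library.
inline std::size_t hash_combine(std::size_t seed, std::size_t value) {
    return seed ^ (value + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2));
}

// Customizing std::hash for a specialized need: hash the members, then mix.
template <>
struct std::hash<Employee> {
    std::size_t operator()(const Employee& e) const noexcept {
        std::size_t h = std::hash<std::string>{}(e.name);
        return hash_combine(h, std::hash<int>{}(e.id));
    }
};
```

One drawback of this per-type approach, and part of the motivation for the framework described above, is that the mixing algorithm is baked into every specialization instead of being swappable.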
Speakers

Victor Ciura

Principal Engineer, Microsoft
Victor Ciura is a Principal Engineer on the Visual C++ team, helping to improve the tools he’s been using for years. He leads engineering efforts across multiple teams working to make Visual Studio the best IDE for C++ game developers. Before joining Microsoft, he programmed...
Monday September 16, 2024 14:00 - 15:00 MDT
Cottonwood 8/9

14:00 MDT

When Lock-Free Still Isn't Enough: An Introduction to Wait-Free Programming and Concurrency Techniques
Monday September 16, 2024 14:00 - 15:00 MDT
If you've attended any talks about concurrency, you've no doubt heard the term "lock-free programming" or "lock-free algorithms". Usually these talks will give you a slide that explains vaguely what this means, and you accept that it is approximately (but not quite exactly) equal to "just don't use locks". More formally, lock-freedom is about guaranteeing how much progress your algorithm will make in a given time. Specifically, a lock-free algorithm will always make *some* progress on at least one operation/thread. It does not guarantee, however, that *all* threads make progress. In a lock-free algorithm, a particular operation can still be blocked for an arbitrarily long time because of the actions of other contending threads. What can we do in situations where this is unacceptable, such as when we want to guarantee low latency for every operation on our data structure rather than just low average latency?

In these situations, there is a stronger progress guarantee that we can aim for called *wait-freedom*. An algorithm is wait-free if *every* operation is guaranteed to make progress in a bounded amount of time, i.e., no thread can ever be blocked for an arbitrarily long time. This helps to guarantee low tail latency for all operations, rather than just low average latency, in which some operations may be left behind. In this talk, we will give an introduction to designing and implementing wait-free algorithms.

Without assuming too much background of the audience, we will review the core ideas of lock-free programming and understand the classic techniques for transforming a blocking algorithm into a lock-free one. The main bread-and-butter technique for lock-free algorithms is the *compare-exchange loop* or "CAS loop", in which an operation reads the current state of the data structure, creates some sort of updated version, and then attempts to install the update via a compare-exchange, looping until it succeeds. Compare-exchange loops suffer under high contention, since the success of one operation will often force others to repeat their work until they succeed. The bread-and-butter technique of wait-free programming that overcomes this issue is *helping*. When operations contend, instead of racing to see who wins, an operation that encounters another already-in-progress operation helps it complete first, then proceeds with its own. This results in the initial operation succeeding instead of being clobbered and forced to try again.
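For readers who have not seen one, here is a minimal sketch of a compare-exchange loop, computing an atomic "store the maximum" (an assumed example, not taken from the talk):

```cpp
#include <atomic>

// Classic CAS loop: read the current value, compute the desired update, and
// retry the compare-exchange until no other thread has changed the value in
// between. Lock-free, but not wait-free: under contention a thread may retry
// an unbounded number of times.
void atomic_store_max(std::atomic<int>& target, int value) {
    int current = target.load(std::memory_order_relaxed);
    while (current < value &&
           !target.compare_exchange_weak(current, value,
                                         std::memory_order_release,
                                         std::memory_order_relaxed)) {
        // On failure, compare_exchange_weak reloads `current`; loop and retry.
    }
}
```

The unbounded retry in this loop is exactly the gap between lock-freedom and wait-freedom that the helping technique is designed to close.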
Speakers

Daniel Anderson

Assistant Teaching Professor, Carnegie Mellon University
Daniel Anderson is an assistant teaching professor at Carnegie Mellon University, where he recently graduated with a PhD in computer science focusing on parallel computing and parallel algorithms. Daniel teaches algorithms classes to hundreds of undergraduate students and spends his...
Monday September 16, 2024 14:00 - 15:00 MDT
Adams A

16:45 MDT

Composing Ancient Mathematical Knowledge Into Powerful Bit-fiddling Techniques
Monday September 16, 2024 16:45 - 17:45 MDT
If you're interested in high-performance computing, this talk is for you! It aims to revolutionise how you think about the performance of associative operations.

Associative iteration is a powerful technique that allows for efficient computation of associative operations (addition, multiplication, and other monoids) in `O(log(N))` time as opposed to `O(N)` time.

The name for this technique was coined by my collaborators, and it is my goal to share their insights about it.

Associative Iteration is an absurdly general technique that leverages the properties of associativity, which, when combined with parallelism, can achieve outstanding performance with very little effort.

We define Associative Iteration (unrelated to "iterators", despite what you might expect) as the repeated application of some associative operation with respect to a "count", some explicit number of operations. This technique applies generally to all monoids, so whether you're concatenating strings, multiplying matrices, or simply adding integers, this talk will be useful to you!

Furthermore, any "divide and conquer" algorithm that can be represented using this method should achieve near-optimal or optimal performance.

This talk will specifically introduce a templated function that allows you to leverage the power of Associative Iteration in a way specific to parallelism.
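As a rough illustration of the underlying idea, here is a sequential sketch of associative iteration as exponentiation-by-squaring over a monoid; the talk's actual templated function and its parallel form may look different:

```cpp
#include <cstdint>

// Apply the associative operation `op` to `x` a total of `count` times,
// folding onto `identity`, using O(log(count)) applications of `op` instead
// of O(count). Works for any monoid: integer addition or multiplication,
// matrix multiplication, string concatenation, ...
template <class T, class Op>
T associative_iterate(T x, std::uint64_t count, T identity, Op op) {
    T result = identity;
    while (count > 0) {
        if (count & 1) result = op(result, x);  // fold in the current power of x
        x = op(x, x);                           // square: x, x^2, x^4, ...
        count >>= 1;
    }
    return result;
}

// Example: 3^13 via repeated multiplication in ~log2(13) steps.
// auto p = associative_iterate<std::uint64_t>(3, 13, 1,
//              [](std::uint64_t a, std::uint64_t b) { return a * b; });
```

The same structure is what makes the parallel version possible: associativity lets the work be regrouped freely across threads.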
Speakers

Jamie Pond

Lead Software Developer, mayk inc
Monday September 16, 2024 16:45 - 17:45 MDT
Cottonwood 8/9

16:45 MDT

Work Contracts – Rethinking Task Based Concurrency and Parallelism for Low Latency C++
Monday September 16, 2024 16:45 - 17:45 MDT
Task-based concurrency offers benefits in streamlining software and enhancing overall responsiveness. Numerous frameworks build upon this approach by furnishing abstractions that harness the capabilities of modern multi-core processors and distributed computing environments. These frameworks achieve this by facilitating the creation of task graphs, which simplify the management of task execution order and dependencies between tasks.

However, these frameworks are typically not well suited for use in low latency environments where every nanosecond matters, and any additional overhead introduced by managing task graphs can impact the system's ability to meet stringent latency requirements.

This presentation introduces Work Contracts, a novel approach specifically tailored for low-latency environments, which re-imagines both task-based concurrency and tasks themselves. We present an innovative lock-free, often wait-free, data structure designed to significantly enhance scalability in parallel task distribution, particularly under high contention. Additionally, we introduce concepts which enhance tasks with internal state, allowing for single-threaded or parallel execution, recurring execution, and deterministic asynchronous task destruction.

Finally, we examine use cases for Work Contracts to showcase the advantages that this solution makes possible, resulting in cleaner, more manageable, and more scalable software that is well suited for low-latency applications.
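As background only, here is a hypothetical sketch of the conventional task-graph style that such frameworks provide, expressed with plain std::async; none of these names come from the Work Contracts library, which the talk itself introduces:

```cpp
#include <future>
#include <iostream>

// Hypothetical pipeline stages, purely for illustration.
int  load()           { return 42; }
int  transform(int x) { return x * 2; }
void publish(int x)   { std::cout << x << '\n'; }

int main() {
    // Conventional task-graph style: the dependency between tasks is expressed
    // by waiting on the result of one task before running the next. The
    // bookkeeping behind such graphs is the overhead the abstract refers to.
    auto loaded      = std::async(std::launch::async, load);
    auto transformed = std::async(std::launch::async,
                                  [&] { return transform(loaded.get()); });
    publish(transformed.get());
}
```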
Speakers

Michael Maniscalco

Software Architect/Principal Developer, Lime Trading
Michael Maniscalco has been a professional C++ developer for over 25 years. Initially he worked in the area of data compression (Intelligent Compression Technologies, later Viasat) and, for the last decade, he has been employed as a principal engineer and software architect at Lime...
Monday September 16, 2024 16:45 - 17:45 MDT
Adams A
 
Tuesday, September 17
 

14:00 MDT

C++26 Preview
Tuesday September 17, 2024 14:00 - 15:00 MDT
Join us as we explore the cutting-edge advancements of C++26, covering both small tweaks and large-scale additions. Although C++23 has not yet been officially published by ISO, the C++ committee is already two-thirds of the way through the design phase of C++26, with less than a year left until feature freeze. In this session, we'll dive into the new features slated for inclusion in C++26, with a focus on those already present in the working draft, and particularly those already making their way into compilers and standard libraries. We'll certainly cover contracts, reflection, format, and many smaller improvements, but given the large amount of material available, we'll customize the content based on audience requests.

We'll cover a variety of topics briefly, but with enough depth to get you started. From enhanced language capabilities to powerful library additions, this session will equip you with the knowledge to leverage these upcoming features in your projects.  The session will be up to date with the latest from the summer C++ Committee meeting.
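As one small example of the kind of working-draft feature in scope, pack indexing (P2662) lets you index directly into a parameter pack; a sketch is below (adopted into the C++26 draft, though compiler support was still landing at the time of the conference, so details may differ from what the session shows):

```cpp
#include <type_traits>

// Pack indexing (P2662): select a pack element by index instead of writing
// recursive helper templates.
template <typename... Ts>
using first_of = Ts...[0];

static_assert(std::is_same_v<first_of<int, double, char>, int>);

int main() {}
```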
Speakers

Jeff Garland

CrystalClear Software
Jeff Garland has worked on many large-scale, distributed software projects over the past 30+ years. The systems span many different domains, including telephone switching, industrial process control, satellite ground control, IP-based communications, and financial systems. He has written...
Tuesday September 17, 2024 14:00 - 15:00 MDT
Maple 3/4/5

16:45 MDT

Taming the C++ Filter View
Tuesday September 17, 2024 16:45 - 17:45 MDT
C++20 introduced "views" as easy-to-use building blocks for processing the elements and values of containers and ranges.
The filter view is one of the key views, because filtering collections of data to process only the elements that satisfy a specific constraint or requirement is one of the most important use cases when dealing with ranges and views.

Unfortunately, the filter view is also one of the most surprising C++ standard views. Even for simple use cases, you can easily get:
- Unexpected functional behavior
- Surprising compile-time errors with cryptic error messages
- Fatal runtime errors (without even noticing them)

There are reasons for the design of the filter view. To filter elements successfully, you should know and understand this design and all of its consequences.

The talk will demonstrate all the issues with simple real-world examples and explain both the motivation for and the consequences of this design in practice. Listen and learn about aspects you might not expect but have to know when using the filter view, and views in general.
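A minimal sketch of two of the surprises in question (simplified; the talk covers the full list and the reasons behind the design):

```cpp
#include <iostream>
#include <ranges>
#include <vector>

int main() {
    std::vector<int> v{1, 2, 3, 4, 5, 6};
    auto even = v | std::views::filter([](int i) { return i % 2 == 0; });

    // Surprise 1: a const filter view cannot be iterated, because begin()
    // caches its result and is therefore a non-const member.
    // const auto& ce = even;
    // for (int i : ce) {}        // does not compile

    // Surprise 2: writing through the view so that an element no longer
    // satisfies the predicate violates the view's semantic requirements;
    // later traversals may skip or revisit elements without any diagnostic.
    for (int& i : even) {
        i += 1;                   // 2 -> 3, 4 -> 5, 6 -> 7
    }
    for (int i : v) std::cout << i << ' ';   // underlying data has changed
    std::cout << '\n';
}
```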
Speakers

Nicolai Josuttis

IT Communication
Nicolai Josuttis (www.josuttis.com) is well-known in the community for his authoritative books and talks. For more than 20 years he has been a member of the C++ Standard Committee. He is the author of several worldwide best-sellers, including C++20: The Complete Guide, C++17: The...
Tuesday September 17, 2024 16:45 - 17:45 MDT
Maple 3/4/5
 
Wednesday, September 18
 

14:00 MDT

Designing a Slimmer Vector of Variants
Wednesday September 18, 2024 14:00 - 15:00 MDT
Heterogeneous containers ("vectors of variants") are an extremely flexible and useful abstraction across many data domains, but std::vector<std::variant<...>> can exhibit extremely bad memory characteristics for mixed types of disparate size, especially if the largest types are relatively uncommon in practice. Variants always have to be at least as large as their largest alternative, and vector implicitly requires all of its elements to be the same size, leading to significant bloat in such cases. Motivated by real-world use cases, this talk explores the design of a bit-packed replacement data structure that can achieve massive improvements in memory usage, and the impacts that these optimizations have on its API.
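A quick illustration of the baseline cost being addressed (the slimmer replacement container is the subject of the talk; this only shows why std::vector<std::variant<...>> bloats):

```cpp
#include <array>
#include <cstdint>
#include <iostream>
#include <variant>
#include <vector>

int main() {
    using Small = std::uint8_t;
    using Large = std::array<char, 256>;        // rare but large alternative
    using V     = std::variant<Small, Large>;

    // Every element pays for the largest alternative plus the discriminator,
    // even if Large values almost never occur in practice.
    std::cout << "sizeof(Small) = " << sizeof(Small) << '\n'
              << "sizeof(V)     = " << sizeof(V)     << '\n';

    std::vector<V> mostly_small(1'000'000, V{Small{0}});
    std::cout << "approx. storage: " << mostly_small.size() * sizeof(V)
              << " bytes for data that is almost entirely one-byte values\n";
}
```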
Speakers

Christopher Fretz

Senior C++ Engineer, Bloomberg
Wednesday September 18, 2024 14:00 - 15:00 MDT
Spruce 3/4

15:15 MDT

The Beman Project: Bringing Standard Libraries to the Next Level
Wednesday September 18, 2024 15:15 - 16:15 MDT
This talk introduces the Beman Project, a new community initiative to improve the quality of C++ standard library enhancements. The project provides a) a simple way for engineers to evaluate standard library proposal implementations, b) a high quality template for standard library authors to share their implementations, and c) a modern platform for the wider C++ community to discuss standard proposals.

We'll discuss the problems the Beman Project is addressing, how it came about, and, most importantly, how to get involved. At the end of this talk you'll feel equipped to build and test proposed standard libraries, provide feedback, and, perhaps, contribute some implementations yourself!

The Beman Project is named in memory of Beman Dawes, the much loved co-founder of Boost.
Speakers

David Sankel

Principal Architect, Adobe
David Sankel is a Principal Scientist at Adobe and an active member of the C++ Standardization Committee. His experience spans microservice architectures, CAD/CAM, computer graphics, visual programming languages, web applications, computer vision, and cryptography. He is a frequent...
Wednesday September 18, 2024 15:15 - 16:15 MDT
Cottonwood 2/3
 
Friday, September 20
 

10:30 MDT

Interesting Upcoming Features from Low Latency, Parallelism and Concurrency from Kona 2023, Tokyo 2024, and St. Louis 2024
Friday September 20, 2024 10:30 - 11:30 MDT
This talk will highlight the key discussions in ISO C++ parallelism- and concurrency-related proposals since the last CppCon, as discussed at the Kona 2023, Tokyo 2024, and St. Louis 2024 C++ standards meetings. We aim to update CppCon attendees every year in this area. We focus on features that are close to standardization and/or appear to be relatively non-controversial, so that you get a look ahead at what is coming in the next C++ release.

This talk, by the Concurrency TS2 editors, will describe all the features and show how they can be used, as well as give their motivation and background and explain how they fit within the overall framework of C++ parallelism and concurrency.

1. Atomic
2. Hazard pointer extensions
3. Pointer Tagging
4. Parallel Algorithms, Parallel Range algorithms

There are other features being discussed at these meetings, but they are still in development and could still change, so we focus on those features that seem close to approval, interesting, and/or relatively non-controversial. We will show the use cases of each and describe how some of them are already heading to C++26 or beyond. This will help programmers working on concurrency, lock-free programming, and low-latency applications take advantage of each of these important facilities.
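For orientation, here is a rough sketch of hazard-pointer usage in the shape proposed for standardization (P2530, building on Concurrency TS2); the header, exact names, and availability are assumptions and may differ from what the speakers present:

```cpp
#include <atomic>
#include <hazard_pointer>   // proposed header; adjust to your implementation

// A reclaimable node opts in by deriving from hazard_pointer_obj_base.
struct Node : std::hazard_pointer_obj_base<Node> {
    int value = 0;
};

std::atomic<Node*> head{nullptr};

int read_value() {
    // Protect the pointer before dereferencing; the node cannot be reclaimed
    // while this hazard pointer refers to it.
    std::hazard_pointer hp = std::make_hazard_pointer();
    Node* n = hp.protect(head);
    return n ? n->value : -1;
}

void replace(Node* new_node) {
    Node* old = head.exchange(new_node);
    if (old) old->retire();   // reclaimed once no hazard pointer protects it
}
```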
Speakers

Paul E. McKenney

Software Engineer, Facebook
Paul E. McKenney has been coding for almost four decades, more than half of that on parallel hardware, where his work has earned him a reputation among some as a flaming heretic. Paul maintains the RCU implementation within the Linux kernel, where the variety of workloads present...

Maged Michael

Staff Software Engineer, Monad Labs
Maged Michael is the inventor of several concurrent algorithms including hazard pointers, lock-free allocation, and multiple concurrent data structure algorithms. His code and algorithms are widely used in standard libraries and production. His 2002 paper on hazard pointers received...

Michael Wong

Distinguished Engineer, Codeplay
Michael Wong is Distinguished Engineer/VP of R&D at Codeplay Software. He is a current Director and VP of ISOCPP, and a senior member of the C++ Standards Committee with more than 15 years of experience. He chairs the WG21 SG5 Transactional Memory and SG14 Games Development/Low Latency/Financials...
Friday September 20, 2024 10:30 - 11:30 MDT
Maple 3/4/5
 