
Keynotes

Tuesday May 14th, 2024 @ 8:45
Keynote 1 [ In Theater 12, Chair: C. Mohan]
Intention is All We Need, To Make Databases Great for App Dev
by Juan Loaiza (Executive Vice President, Oracle) and Tirthankar Lahiri (Senior Vice President of Data, Oracle).


Juan Loaiza is Executive Vice President of mission-critical database technologies at Oracle. He is responsible for leading product strategy and engineering for the world’s leading transaction processing and engineered systems technologies. Juan holds BS and MS degrees in computer science from the Massachusetts Institute of Technology.
Tirthankar Lahiri is Senior Vice President of Data, In-Memory and AI Vector technologies at Oracle. He is responsible for the Oracle Database Data Engine as well as the TimesTen In-Memory Database and Oracle NoSQLDB. He has a B.Tech in Computer Science from IIT Kharagpur and an MS in Electrical Engineering from Stanford University.

Wednesday May 15th, 2024 @ 8:30
Keynote 2 [ In Theater 12, Chair: TBD ]
How serious are we about green computing? The impact of data intensive computing
by Gustavo Alonso (Professor, ETH Zurich).

Many applications dominating the computing landscape are data intensive: data analytics, machine learning, large language models, recommendation systems, etc. The amount of data processed by these systems is staggering and continues to grow at an exponential rate. While the use of more and more data has led to impressive progress in many areas, it has an often ignored side effect: data movement is expensive, requires many resources, and is often inefficiently managed. Any serious attempt at improving the sustainability and overall efficiency of data centers must therefore include improvements in the way we handle and process data. In this talk I will show why existing systems are inherently inefficient in data movement, resource utilization, and processing requirements. I will then discuss potential solutions that take advantage of the trends toward specialization and the large economies of scale of the cloud, suggesting along the way how to design data-centric architectures that are more energy and resource efficient than what we have today.

Gustavo Alonso is a professor in the Department of Computer Science of ETH Zurich, where he is a member of the Systems Group (www.systems.ethz.ch) and the head of the Institute of Computing Platforms. He leads the AMD HACC (Heterogeneous Accelerated Compute Cluster) deployment at ETH (https://github.com/fpgasystems/hacc), a research facility with several hundred users worldwide that supports exploring data center hardware-software co-design. His research interests include data management, cloud computing architecture, and building systems on modern hardware. Gustavo holds degrees in telecommunication from the Madrid Technical University and an MS and PhD in Computer Science from UC Santa Barbara. Prior to joining ETH, he was a research scientist at IBM Almaden in San Jose, California. Gustavo has received 4 Test-of-Time Awards for his research in databases, software runtimes, middleware, and mobile computing. He is an ACM Fellow, an IEEE Fellow, a Distinguished Alumnus of the Department of Computer Science of UC Santa Barbara, and has received the Lifetime Achievement Award from the European Chapter of ACM SIGOPS (EuroSys).

Thursday May 16th, 2024 @ 8:30
Keynote 3 [ In Theater 12, Chair: Yannis Velegrakis ]
AI Systems beyond Accelerating Linear Algebra
by Christos Kozyrakis (Stanford University).

Over the past decade, there has been remarkable progress in the co-design of hardware and software systems for artificial intelligence (AI). Much of this progress has focused on accelerating computationally intensive operations, such as the matrix multiplications in AI training and inference tasks. This talk will address the broader systems issues that are now emerging as significant bottlenecks for AI workloads. We will review challenges such as making inference resource efficient, optimizing workloads involving multiple AI tasks, and feeding AI workloads with data. We will advocate for the design of AI infrastructure that looks more like the scale-out systems used for cloud computing than the supercomputing systems used for HPC. Finally, we will underscore the need to broaden the scope of AI systems co-design to encompass the applications themselves.

Christos Kozyrakis is a Professor of Electrical Engineering and Computer Science at Stanford University. He is also the faculty director of the Stanford Platform Lab. Christos' research focuses on computer architecture and systems software. He is currently working on cloud computing technology, systems design for artificial intelligence, and artificial intelligence for systems design. Christos holds a BS degree from the University of Crete (Greece) and a PhD degree from the University of California at Berkeley (USA). He is a fellow of the ACM and the IEEE. He has received the ACM SIGARCH Maurice Wilkes Award, the ISCA Influential Paper Award, the ASPLOS Influential Paper Award, the NSF CAREER Award, the Okawa Foundation Research Grant, and faculty awards from IBM, Microsoft, and Google. Christos has also worked for technology companies such as Google and Intel and has helped launch AI infrastructure startups such as Enfabrica and Plix.



Disclaimer: The Organizing Committee of an ICDE conference is not liable for any loss or damage arising from the activities of this particular conference as exercised by its agents: conference organizers, carriers, proceedings, publications, and program committee.
© ICDE 2024