August 2024
Decoding the Energy Footprint of AI
Why is efficiency the next frontier for tech advancement?
Aug 7, 2024 • Cristiano De Nobili
July 2024
How to make LLMs more memory-efficient
Beyond PEFT: from Mixture of Experts to Quantum Tensor Networks
Jul 3, 2024 • Cristiano De Nobili
May 2024
The era of Artificial Collective Intelligence (ACI) is about to start
Ensemble Models & Multi-agent Collective Behaviours
May 9, 2024 • Cristiano De Nobili
April 2024
Thermodynamic Computing
New hardware for future AI algorithms
Apr 3, 2024 • Cristiano De Nobili
February 2024
Tuning LLMs: a galaxy of endless possibilities
Shedding light on new techniques
Feb 14, 2024 • Cristiano De Nobili
January 2024
Quantum Computing: enthusiasm and scepticism still coexist
Logical, Physical, Topological Qubits, and error-correcting codes towards 2024
Jan 3, 2024 • Cristiano De Nobili
November 2023
How can Philosophy elevate Large Language Models?
LLMs can disseminate philosophy at scale, spreading best thinking practices everywhere
Nov 7, 2023 • Cristiano De Nobili
August 2023
Emergent abilities of Large Language Models
Exploring unexpected behaviour in transformer-based deep learning models
Aug 8, 2023 • Cristiano De Nobili
January 2023
Launching Turning bits into dreams
Turning bits into dreams is a newsletter about information-based technologies, from AI to Quantum Computing, whose aim is to make us dream of an…
Jan 12, 2023 • Cristiano De Nobili