Read On (Public)
 
Keeping you up to date with the latest trends and best-performing architectures in this fast-evolving field of computer science. Selecting papers by comparative results, citations, and influence, we educate you on the latest research. Consider supporting us on Patreon.com/PapersRead for feedback and ideas.
 
A weekly show all about audiobooks, recorded at the RNIB Talking Book studios. We talk to your favourite authors and narrators, along with reviews and news about new audiobooks. Presented and produced by Robert Kirkwood, you'll find a new episode here every Friday at 1pm, plus bonus content such as longer uncut interviews and episodes of our occasional extra show, The Book Group. Talking Books is a free service from RNIB giving access to over 40,000 fiction and non-fiction books for adults and ...
 
Left On Read is a podcast where I'll be talking through all my bookish thoughts. Let's hang 🖤 • (Yes, I did rebrand the cover art. I wanted something simpler and less busy.) • https://instagram.com/ashleyyyreads
 
We Read It One Night

Alison and Rachel

Monthly+
 
Welcome to We Read It One Night, the bookish comedy podcast where sisters Alison and Rachel introduce you to the next romance novel that’ll make you want to stay up all night reading. Subscribe! Follow! Rate! Review! Tell your friends about us! Instagram: @wereaditonenight Twitter: @wereaditpodcast Facebook: We Read It One Night TikTok: @wereaditonenight Email: wereaditonenight [at] gmail.com
 
On Read This One, we read all of our favorite children's books for you to follow along with before bed, in the car, or wherever! It's a perfect podcast for children learning to read. Have a favorite book you want us to read? Great! You can email us at readthisonepodcast@gmail.com to tell us what you want to hear. Let's discover new books together! Support this podcast: https://podcasters.spotify.com/pod/show/samuel289/support
 
 
We present InstantMesh, a feed-forward framework for instant 3D mesh generation from a single image, featuring state-of-the-art generation quality and significant training scalability. By synergizing the strengths of an off-the-shelf multiview diffusion model and a sparse-view reconstruction model based on the LRM architecture, InstantMesh is able …
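As a rough illustration of the two-stage pipeline this abstract describes, the sketch below chains a multiview diffusion step into a sparse-view, LRM-style reconstructor. The class and method names (generate_views, reconstruct) are hypothetical placeholders, not the paper's actual API.

    from PIL import Image

    def image_to_mesh(image: Image.Image, diffusion, reconstructor):
        # Stage 1: synthesize a consistent set of novel views from the single input image.
        views = diffusion.generate_views(image, num_views=6)
        # Stage 2: a feed-forward sparse-view reconstructor (LRM-style) predicts the mesh
        # directly -- no per-shape optimization loop, which is what makes generation "instant".
        return reconstructor.reconstruct(views)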
 
The Year of the Dresden is finally here! Hannah and Laura are covering the first third of Jim Butcher's Storm Front and, friends, this book is a romp. Laura also makes a bold claim about television shows that she watched growing up and gushes about a new favorite book. Hannah has delved into a lot of recommendations that were given to her by Laura a…
 
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 4:50 - Read It On Reddit 16:12 - Ask Reddit 21:59 - Today I Advice 28:40 - Shower Thoughts 34:22 - Podnapping: Mythbusters AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
 
Read On this week features Chioma Okereke with her coming-of-age audiobook Water Baby, set in the floating slum of Makoko in Lagos. We hear about The Trees: A Novel by Percival Everett and find new books entering the RNIB Library. By RNIB Connect Radio
 
Welcome to Episode 205! April is National Poetry Month and we are here for it. Emily is currently reading YOU ARE HERE: Poetry in the Natural World, a new anthology edited by Ada Limón, and Chris is reading BOATS FOR WOMEN by Sandra Yannone. Since our last episode, Chris finished listening to WAKE UP WITH PURPOSE! What I’ve Learned in my First Hundr…
 
We analyze how well pre-trained large language models (e.g., Llama2, GPT-4, Claude 3, etc.) can do linear and non-linear regression when given in-context examples, without any additional training or gradient updates. Our findings reveal that several large language models (e.g., GPT-4, Claude 3) are able to perform regression tasks with a performance…
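A minimal sketch of this evaluation setup, assuming a generic text-completion client (ask_llm below is a placeholder): serialize the (x, y) pairs into the prompt, append a query x, and parse the completion as the prediction. No gradient updates are involved.

    def regression_prompt(examples, query_x):
        # Serialize in-context examples, one (x, y) pair per line.
        lines = [f"x = {x:.3f}, y = {y:.3f}" for x, y in examples]
        lines.append(f"x = {query_x:.3f}, y =")  # the model completes the missing y
        return "\n".join(lines)

    examples = [(0.0, 1.0), (1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]  # roughly y = 2x + 1
    prompt = regression_prompt(examples, query_x=4.0)
    # prediction = float(ask_llm(prompt))  # a capable model should answer near 9.0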
 
Hey pals! Today, we're reeled in by listener suggestion HOOK, LINE & SINKER by Tessa Bailey, the sister sequel to IT HAPPENED ONE SUMMER. Fox and Hannah take us through a steamy friends-to-lovers book filled with sea shanties, the Pacific Northwest (with accompanying Twilight vibes), and plenty of angst. Enjoy the show! Ep. 38 - It Happened One Summer b…
 
Researchers have made significant progress in automating the software development process in the past decades. Automated techniques for issue summarization, bug reproduction, fault localization, and program repair have been built to ease the workload of developers. Recent progress in Large Language Models (LLMs) has significantly impacted the devel…
 
Large language models (LLMs), exemplified by ChatGPT, have gained considerable attention for their excellent natural language processing capabilities. Nonetheless, these LLMs present many challenges, particularly in the realm of trustworthiness. Therefore, ensuring the trustworthiness of LLMs emerges as an important topic. This paper introduces Tru…
 
In this study, we propose AniPortrait, a novel framework for generating high-quality animation driven by audio and a reference portrait image. Our methodology is divided into two stages. Initially, we extract 3D intermediate representations from audio and project them into a sequence of 2D facial landmarks. Subsequently, we employ a robust diffusio…
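The two stages can be pictured as in the sketch below, where audio2landmarks and render_with_diffusion are hypothetical stand-ins for the components the abstract names (3D-representation extraction with 2D-landmark projection, and the diffusion-based renderer).

    def animate_portrait(audio, reference_image, audio2landmarks, render_with_diffusion):
        # Stage 1: audio -> 3D intermediate representations -> 2D facial landmark sequence.
        landmark_seq = audio2landmarks(audio)
        # Stage 2: a diffusion model renders each frame of the reference portrait,
        # conditioned on the corresponding landmarks.
        return [render_with_diffusion(reference_image, lm) for lm in landmark_seq]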
 
Generating long-form 44.1kHz stereo audio from text prompts can be computationally demanding. Further, most previous works do not address the fact that music and sound effects naturally vary in duration. Our research focuses on the efficient generation of long-form, variable-length stereo music and sounds at 44.1kHz using text prompts with a generative…
 
Creating high-fidelity 3D head avatars has long been a research hotspot, but it remains a great challenge under lightweight sparse-view setups. In this paper, we propose Gaussian Head Avatar, represented by controllable 3D Gaussians, for high-fidelity head avatar modeling. We optimize the neutral 3D Gaussians and a fully learned MLP-based deform…
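A toy sketch of that representation, assuming an expression-conditioned MLP that displaces each Gaussian's mean (the sizes and conditioning here are illustrative assumptions, not the paper's):

    import torch

    class GaussianHeadSketch(torch.nn.Module):
        def __init__(self, n_gaussians=10_000, expr_dim=64):
            super().__init__()
            # Learnable neutral Gaussian centers (other Gaussian parameters omitted for brevity).
            self.means = torch.nn.Parameter(torch.randn(n_gaussians, 3) * 0.1)
            # Fully learned MLP predicting a per-Gaussian deformation from expression.
            self.deform = torch.nn.Sequential(
                torch.nn.Linear(3 + expr_dim, 128), torch.nn.ReLU(), torch.nn.Linear(128, 3)
            )

        def forward(self, expression):  # expression: 1-D tensor of size expr_dim
            cond = expression.expand(self.means.shape[0], -1)
            return self.means + self.deform(torch.cat([self.means, cond], dim=-1))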
 
Hannah and Laura had so much fun talking with the wonderful Dani Finn about queer romance as a genre, queer characters, and tropes! Dani shares their writing journey, talks publishing, and highlights some of their favorite books featuring queer love. If you're looking to add to your TBR, this is the episode you need to listen to! Be sure to follow Dani a…
 
Parameter-efficient fine-tuning (PEFT) methods seek to adapt large models via updates to a small number of weights. However, much prior interpretability work has shown that representations encode rich semantic information, suggesting that editing representations might be a more powerful alternative. Here, we pursue this hypothesis by developing a f…
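To make the contrast with weight-based PEFT concrete, here is a minimal sketch of a representation-editing intervention: a tiny trainable module that adds a learned low-rank edit to a frozen layer's hidden states. The parameterization is illustrative, not necessarily the paper's exact formulation.

    import torch

    class RepresentationEdit(torch.nn.Module):
        """Edits hidden representations instead of updating model weights."""
        def __init__(self, hidden_size, rank=4):
            super().__init__()
            self.down = torch.nn.Linear(hidden_size, rank, bias=False)
            self.up = torch.nn.Linear(rank, hidden_size, bias=False)

        def forward(self, hidden):
            # Only this small module is trained; the base model stays frozen.
            return hidden + self.up(torch.relu(self.down(hidden)))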
 
Large language models (LLMs) often generate content that contains factual errors when responding to fact-seeking prompts on open-ended topics. To benchmark a model's long-form factuality in open domains, we first use GPT-4 to generate LongFact, a prompt set comprising thousands of questions spanning 38 topics. We then propose that LLM agents can be…
 
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 1:59 - Read It On Reddit 22:28 - Ask Reddit 35:06 - Today I Advice 39:29 - Shower Thoughts 48:40 - Podnapping: r/trivia - Absurd American Trivia AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
 
We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage …
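Schematically, such a hybrid stack might be assembled as below. The block classes are placeholders, and the interleaving ratio and MoE placement are design choices reported in the paper, so treat the specific numbers here as assumptions.

    import torch

    def build_hybrid_stack(n_layers, d_model, TransformerBlock, MambaBlock, MoEFFN):
        layers = []
        for i in range(n_layers):
            # Interleave: an occasional attention block among mostly Mamba blocks.
            layers.append(TransformerBlock(d_model) if i % 4 == 0 else MambaBlock(d_model))
            # MoE feed-forward in alternating layers: total capacity grows with the
            # expert count while active parameters per token stay small (top-k routing).
            if i % 2 == 1:
                layers.append(MoEFFN(d_model, num_experts=16, top_k=2))
        return torch.nn.Sequential(*layers)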
 
Recent years have witnessed a rapid development of large language models (LLMs). Despite the strong ability in many language-understanding tasks, the heavy computational burden largely restricts the application of LLMs, especially when one needs to deploy them onto edge devices. In this paper, we propose a quantization-aware low-rank adaptation (Q…
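The general idea can be sketched as a linear layer whose frozen weights stay in low-bit integer form while only small low-rank factors are trained. Note this is generic quantized-base LoRA for illustration, not the paper's specific construction.

    import torch

    class QuantizedLoRALinear(torch.nn.Module):
        def __init__(self, q_weight, scale, rank=8):
            super().__init__()
            self.register_buffer("q_weight", q_weight)  # frozen int8 weights, shape (out, in)
            self.register_buffer("scale", scale)        # dequantization scale
            out_f, in_f = q_weight.shape
            # Only the low-rank factors A and B are trainable.
            self.A = torch.nn.Parameter(torch.randn(rank, in_f) * 0.01)
            self.B = torch.nn.Parameter(torch.zeros(out_f, rank))

        def forward(self, x):
            w = self.q_weight.float() * self.scale  # dequantize on the fly
            return x @ (w + self.B @ self.A).t()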
 
In today's show we're re-joined by Louise Hare as she tells us about the sequel to Miss Aldridge Regrets, Harlem After Midnight. Plus we delve into the archive to talk to Sara Collins about The Confessions of Frannie Langton, and find some brand new audiobooks that are also available from RNIB Talking Books.…
 
We present MegaBlocks, a system for efficient Mixture-of-Experts (MoE) training on GPUs. Our system is motivated by the limitations of current frameworks, which restrict the dynamic routing in MoE layers to satisfy the constraints of existing software and hardware. These formulations force a tradeoff between model quality and hardware efficiency, a…
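The routing problem being targeted can be shown with a dense, loop-based emulation of "dropless" MoE: every token goes to its top-k experts with no capacity cap, so per-expert batch sizes vary each step. MegaBlocks maps that variable-size work onto block-sparse GPU kernels; the Python loop below is only for clarity.

    import torch

    def dropless_moe(x, router, experts, top_k=2):
        probs = router(x).softmax(dim=-1)                # (tokens, num_experts)
        weights, expert_idx = probs.topk(top_k, dim=-1)  # each token's top-k experts
        out = torch.zeros_like(x)
        for e, expert in enumerate(experts):
            token_ids, slot = (expert_idx == e).nonzero(as_tuple=True)
            if token_ids.numel():  # variable-size expert batch: no tokens dropped or padded
                out[token_ids] += weights[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out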
 
We introduce VoiceCraft, a token-infilling neural codec language model that achieves state-of-the-art performance on both speech editing and zero-shot text-to-speech (TTS) on audiobooks, internet videos, and podcasts. VoiceCraft employs a Transformer decoder architecture and introduces a token rearrangement procedure that combines causal masking a…
 
Indiemission #5 is coming to a close, but not without an awesome author interview! This week Hannah and Laura are thrilled to have Bryan S. Glosemeyer on the pod to talk about reading, his writing journey, and of course, Before the Shattered Gates of Heaven. They chat about the book's characters, themes, inspirations for the series and what's to co…
 
This paper focuses on task-agnostic prompt compression for better generalizability and efficiency. Considering the redundancy in natural language, existing approaches compress prompts by removing tokens or lexical units according to their information entropy obtained from a causal language model such as LLaMa-7B. The challenge is that information e…
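The scoring-and-filtering step this alludes to can be sketched as follows: compute each token's surprisal under a causal LM and keep the highest-information tokens. The sketch assumes a Hugging Face-style model whose output exposes a .logits field; model and tokenizer loading are elided, and keep_ratio and the always-kept first token are illustrative choices.

    import torch

    def compress_prompt(input_ids, causal_lm, keep_ratio=0.5):
        # Score tokens by surprisal (negative log-likelihood) under the causal LM.
        with torch.no_grad():
            logits = causal_lm(input_ids.unsqueeze(0)).logits[0]      # (seq, vocab)
        logp = logits.log_softmax(dim=-1)
        nll = -logp[:-1].gather(1, input_ids[1:, None]).squeeze(1)    # token t scored from tokens < t
        k = max(1, int(keep_ratio * nll.numel()))
        keep = nll.topk(k).indices.sort().values + 1                  # high-information positions, in order
        keep = torch.cat([torch.tensor([0], device=keep.device), keep])  # always keep the first token
        return input_ids[keep]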
 
Happy Autism Acceptance Month!!! We're celebrating with another one of the books that made Alison realize she was autistic: ALWAYS ONLY YOU by Chloe Liese. And how better to celebrate this seminal book than with a seminal guest host? I'm joined by Elodie, aka the famous book club member who chose The Viscount Who Loved Me in March 2020, changed bot…
 
RED BUBBLE STORE: https://rdbl.co/2BXMEkq DISCORD: https://discord.com/invite/uWZkb2a 3:30 - Read It On Reddit 12:06 - Ask Reddit 23:26 - Today I Advice 35:28 - Shower Thoughts 45:00 - Podnapping: Kids Say The Darndest Things AMA - readitpodcast@gmail.com - Ask Us Anything! LET YOUR GUARD DOWN! By Read It Podcasts
 
This week Robert Kirkwood talks to former military policeman and former Met Detective Neil Lancaster about the latest DS Max Craigie book, The Devil You Know, and how his career influenced his writing. Plus we find some brand new books out now and entering the Talking Books library. By RNIB Connect Radio
 
We present a novel application of evolutionary algorithms to automate the creation of powerful foundation models. While model merging has emerged as a promising approach for LLM development due to its cost-effectiveness, it currently relies on human intuition and domain knowledge, limiting its potential. Here, we propose an evolutionary approach th…
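A toy version of the search loop implied here treats per-layer merge coefficients as a genome and evolves them against a fitness score. merge_models and evaluate are placeholders for real checkpoint interpolation and a validation benchmark.

    import random

    def evolve_merge_recipe(n_layers, evaluate, merge_models, generations=50, pop_size=20, sigma=0.1):
        # Genome: one merge coefficient per layer, each in [0, 1].
        population = [[random.random() for _ in range(n_layers)] for _ in range(pop_size)]
        for _ in range(generations):
            ranked = sorted(population, key=lambda g: evaluate(merge_models(g)), reverse=True)
            parents = ranked[: pop_size // 4]  # keep the fittest quarter
            # Refill the population with Gaussian-mutated copies of the parents.
            population = parents + [
                [min(1.0, max(0.0, w + random.gauss(0.0, sigma))) for w in random.choice(parents)]
                for _ in range(pop_size - len(parents))
            ]
        return max(population, key=lambda g: evaluate(merge_models(g)))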
 