
Player FM - Internet Radio Done Right
Added twenty-nine weeks ago
Content provided by Kabir. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Kabir or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process described here: https://he.player.fm/legal.

Dev Ops Sec War Game the Microsoft Way!

11:37
 
 

Manage episode 446787906 series 3605659

This episode covers "war games," a process in which teams are assigned red and blue roles to find security risks in code before it ships. The red team simulates attacks to expose security vulnerabilities, while the blue team defends the systems, aiming to detect and mitigate those attacks. These exercises improve security practices and foster a security-conscious culture by simulating real-world scenarios and promoting continuous improvement.
The episode offers practical guidance for implementing war games, including team organization, structured phases, rules of engagement, documentation processes, and critical lessons from Microsoft's experience. The overarching goal is to enhance system security and build a more resilient organization.

Send us a text

Support the show

Podcast:
https://kabir.buzzsprout.com
YouTube:
https://www.youtube.com/@kabirtechdives
Please subscribe and share.


245 episodes


All episodes

 
Google AI has developed DolphinGemma, a new AI model, to help scientists at the Wild Dolphin Project decode the complex communication of Atlantic spotted dolphins. Trained on decades of dolphin vocalization data, DolphinGemma identifies patterns and predicts sound sequences, potentially revealing the structure and meaning within their natural communication. Researchers are also using CHAT (Cetacean Hearing Augmentation Telemetry), leveraging Google Pixel phones, to explore two-way communication by associating synthetic whistles with objects to establish a basic shared vocabulary with dolphins. The ultimate goal is to better understand and potentially interact with dolphins, and Google plans to share DolphinGemma with the wider research community.
 
This episode is about Microsoft's April 2025 progress report on its Secure Future Initiative (SFI), a comprehensive, multiyear effort to enhance the security of its products and services. The report highlights advancements across various security domains, including fostering a security-first culture, strengthening governance, and implementing core security principles like Secure by Design, Secure by Default, and Secure Operations. It details progress within six engineering pillars focused on crucial areas such as protecting identities and secrets, isolating production systems, securing networks, and improving threat detection and response. Furthermore, the report outlines efforts to increase transparency around security vulnerabilities and aligns its progress with recommendations from the Cyber Safety Review Board (CSRB).
 
This episode explores the current landscape and future trajectory of large language models (LLMs) and generative AI. One document details ten practical applications of LLMs in 2024, highlighting tools like ChatGPT and Grammarly, while another introduces DeepSeek-V3, a powerful open-source model, focusing on its architecture and training innovations. A Deloitte report from January 2025 surveys enterprise adoption of generative AI, analyzing its business value and future expectations. Discussions from Reddit and an AI expert offer perspectives on why companies release open-source models, considering factors like competitive advantage and community contributions, alongside analyses of models like Meta's Llama 3 and Anthropic's Claude 3. Together, these texts provide a multifaceted overview of LLM technology, its implementation, and the strategic motivations behind its development and dissemination.
 
This episode explores the burgeoning field of AI in video production, highlighting advancements like Runway Gen-4's precise camera controls and the emergence of powerful generative models such as OpenAI's Sora and the open-source Open-Sora project. These technologies are streamlining workflows in pre-production with tools like Storyboarder.ai and Filmustage, enhancing on-set filming and post-production editing, and even enabling the creation of entirely AI-generated videos. Researchers are also investigating AI's role in interactive narratives, using techniques to dynamically manage player experiences. While AI offers increased efficiency and creative possibilities, the sources also touch upon ethical considerations and the evolving role of human creativity in this transforming landscape.
 
This episode introduces Manus AI, an autonomous AI agent from a Chinese startup, highlighting its ability to execute complex tasks with minimal user input, setting it apart from tools like ChatGPT and DeepSeek. Manus AI boasts multi-modal capabilities, advanced tool integration, and adaptive learning, achieving top performance in benchmarks for real-world problem-solving. The sources discuss Manus AI's potential to revolutionize productivity across various industries, offering benefits such as scalability, cost optimization through efficient resource management, and enhanced data privacy with self-hosting options. While currently invite-only, Manus AI is generating significant excitement for its capacity to act as a personal AI assistant that can plan, research, create, and even deploy projects independently, signaling a new era in AI-powered automation.
 
Researchers introduced Test-Time Training (TTT) layers to enhance the ability of pre-trained Diffusion Transformers to generate longer, more complex videos from text. These novel layers, inspired by meta-learning, allow the model's hidden states to adapt during the video generation process. To validate their approach, they created a dataset of annotated Tom and Jerry cartoons for training and evaluation. Their model, incorporating TTT layers, outperformed existing methods in generating coherent, minute-long videos with multi-scene stories and dynamic motion, as judged by human evaluators. While promising, the generated videos still exhibit some artifacts, and the method's efficiency could be improved. The study demonstrates a step forward in creating longer, story-driven videos from textual descriptions.
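The core idea described above, a hidden state that adapts via a self-supervised update while the sequence is being processed, can be illustrated with a toy sketch. Everything here (the scalar weight, the reconstruction loss, the learning rate) is an illustrative assumption, not the paper's actual architecture:

```python
# Toy sketch of a test-time-training layer: the "hidden state" is a tiny
# linear model w that takes a gradient step on a self-supervised
# reconstruction loss (w*x - x)^2 for each incoming value, so the state
# adapts during generation rather than staying frozen.
def ttt_layer(sequence, lr=0.1):
    w = 0.0                        # hidden state, trained at test time
    outputs = []
    for x in sequence:
        pred = w * x               # layer output for this step
        outputs.append(pred)
        grad = 2 * (pred - x) * x  # d/dw of the loss (w*x - x)^2
        w -= lr * grad             # adapt the hidden state in place
    return outputs, w
```

On a constant sequence the state converges toward reconstructing the input exactly, which is the sense in which the layer "learns" from the data it sees at inference time.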
 
The emergence of powerful Large Language Models (LLMs) has led to a significant increase in text-generation capabilities, making it challenging to distinguish between human-written and machine-generated content. This has created a pressing need for effective LLM-generated text detection. The necessity for this detection arises from several critical concerns outlined in the survey, including the potential misuse of LLMs in spreading disinformation, facilitating online fraud, producing social media spam, and enabling academic dishonesty. Furthermore, LLMs are susceptible to fabrications and reliance on outdated information, which can propagate erroneous knowledge and undermine technical expertise. The increasing role of LLMs in data generation for AI research also raises concerns about the recursive use of LLM-generated text, which could degrade the quality and diversity of future models. The ability to discern LLM-generated text is therefore crucial for maintaining trust in information, safeguarding various societal domains, and ensuring the integrity of AI research and development. To address this need, the survey provides clear definitions for human-written text and LLM-generated text. Human-written text is crafted by individuals to express thoughts, emotions, and viewpoints, reflecting personal knowledge, cultural context, and emotional disposition; it spans a wide range of human expression, such as articles, poems, and reviews. In contrast, LLM-generated text is cohesive, grammatically sound, and pertinent content produced by LLMs trained on extensive datasets using NLP techniques and machine-learning methodologies.
The quality and fidelity of the generated text typically depend on the model's scale and the diversity of its training data. Table 1 further illustrates the subtlety of distinguishing between these two types of text, noting that even when LLMs fabricate facts, the output often lacks intuitively discernible differences from human writing. LLMs generate text by constructing the output sequentially, with quality intrinsically linked to the chosen decoding strategy. Given a prompt, the model calculates a probability distribution over its vocabulary, and the next word ($y_t$) is sampled from this distribution. The survey highlights several predominant decoding techniques. Greedy search selects the token with the highest probability at each step, which can lead to repetitive, less diverse text. Beam search considers multiple high-probability sequences (beams), potentially improving quality but still prone to repetition. Top-k sampling randomly samples from the $k$ most probable tokens, increasing diversity but risking incoherence if less relevant tokens are included. Top-p sampling (nucleus sampling) dynamically selects a subset of tokens based on a cumulative probability threshold $p$, aiming for a balance between coherence and diversity. These decoding strategies demonstrate that LLM text generation is not a deterministic process but involves probabilistic selection and strategic c…
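The decoding strategies described above can be sketched over a toy probability distribution. The vocabulary and probabilities are illustrative assumptions, not taken from any real model:

```python
# Toy decoding strategies over a fixed next-token distribution.
import random

probs = {"the": 0.40, "a": 0.25, "dog": 0.20, "ran": 0.10, "zebra": 0.05}

def greedy(dist):
    # Greedy search: always take the single most probable token.
    return max(dist, key=dist.get)

def top_k(dist, k, rng):
    # Top-k sampling: renormalize over the k most probable tokens, then sample.
    best = sorted(dist, key=dist.get, reverse=True)[:k]
    total = sum(dist[t] for t in best)
    return rng.choices(best, weights=[dist[t] / total for t in best])[0]

def top_p(dist, p, rng):
    # Nucleus sampling: keep the smallest set of top tokens whose cumulative
    # probability reaches p, then sample from that renormalized set.
    nucleus, cum = [], 0.0
    for t in sorted(dist, key=dist.get, reverse=True):
        nucleus.append(t)
        cum += dist[t]
        if cum >= p:
            break
    total = sum(dist[t] for t in nucleus)
    return rng.choices(nucleus, weights=[dist[t] / total for t in nucleus])[0]

rng = random.Random(0)
print(greedy(probs))          # always "the"
print(top_k(probs, 2, rng))   # one of {"the", "a"}
print(top_p(probs, 0.6, rng)) # nucleus is {"the", "a"}: 0.40 + 0.25 >= 0.6
```

The contrast is visible even in this sketch: greedy is deterministic, while top-k and top-p trade determinism for diversity by sampling from a restricted, renormalized set.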
 
This episode provides a comprehensive exploration of Artificial Intelligence (AI) and Machine Learning (ML) within educational environments. At its core, AI is the simulation of human intelligence processes by machines, particularly computer systems, encompassing capabilities such as learning, reasoning, problem-solving, perception, and language understanding. Machine Learning, a significant subfield of AI, empowers computer systems to learn and improve their performance on a specific task over time without being explicitly programmed; this learning occurs through the analysis of data, allowing machines to identify patterns, make predictions, or make decisions. The discussion highlights the growing relevance and integration of AI technologies within K-12 and library education. Recognizing this shift, the Wisconsin Department of Public Instruction has developed guidance to support K-12 educators, librarians, students, and administrators in navigating and responsibly leveraging these powerful technologies, aiming to foster a thoughtful and ethical approach to AI adoption in educational settings. Several key goals underpin this guidance. One crucial objective is the development of policies for the ethical use of AI, which involves considering the moral principles that govern behavior in AI applications. Ensuring data privacy is another paramount concern: as AI systems often rely on data, safeguarding personally identifiable information becomes critical. The guidance also strongly advocates for a human-centered approach to AI, encapsulated by the "H > AI > H" mnemonic.
This simple yet effective tool, borrowed from Washington State's AI guidance (2024), serves as a reminder that responsible AI use begins with a carefully crafted human prompt to elicit a relevant AI response, and that the information gathered from AI should then be thoroughly examined by humans before being put into practice. This three-part concept emphasizes human oversight in all AI interactions. Maintaining a human-centered approach also ensures that AI complements, rather than replaces, the interpersonal connections vital for social-emotional learning (SEL) development. The episode further delves into practical applications of AI within education, showcasing hands-on projects designed to help students understand, use, and potentially even create AI technologies across diverse subject areas and grade levels. Topics include data bias, where AI systems can reflect and amplify biases present in the data they are trained on, and recommender systems, which use AI to suggest items or content based on user preferences. Beyond specific applications, the broader societal impact of AI is a significant area of focus, prompting students to consider the far-reaching consequences of these technologies. Framewor…
 
This episode explores the anticipated trajectory of artificial intelligence in 2025, highlighting key trends impacting various sectors. AI agents, capable of autonomous reasoning and action, are a prominent focus across multiple sources. Advancements in multimodal AI, which processes diverse data types, and the growth of open-source and smaller, more efficient AI models are also emphasized. The increasing integration of AI into everyday applications and workplaces, alongside its potential to accelerate scientific research and transform industries like healthcare and supply chains, is frequently discussed. Furthermore, the significance of responsible AI development, including measurement, customization, and the need for broader regulation and ethical considerations, is underscored.
 
OpenAI and Anthropic are actively competing to become the primary AI tool for college students. Both companies have recently unveiled initiatives aimed at higher education, with Anthropic introducing Claude for Education and OpenAI making ChatGPT Plus temporarily free for university students in the U.S. and Canada. Anthropic's approach includes "Learning mode" with Socratic questioning to foster critical thinking, while OpenAI highlights advanced features for academic tasks. This simultaneous push underscores the high value placed on college students as the next generation of AI users. Ultimately, this competition seeks to establish each company's AI as the default choice within academia.
 
Dartmouth researchers conducted a clinical trial of their AI-powered therapy chatbot, Therabot, and found significant mental health improvements in participants with depression, anxiety, and eating disorder risks. The study showed symptom reductions comparable to traditional therapy, with participants reporting trust and connection with the AI. These findings suggest that AI therapy could increase access to mental health support, especially for those lacking regular care. Researchers emphasize that while promising, AI therapy requires clinician oversight to ensure safety and efficacy. The Therabot trial indicates the potential for AI to offer scalable and personalized mental health assistance.
 
Microsoft is reportedly scaling back its ambitious AI data center expansion plans. This decision follows the emergence of new, more cost-effective AI model development methods, particularly from Chinese companies. These methods demonstrate that advanced AI can be achieved without the massive computing infrastructure initially anticipated. Consequently, Microsoft has paused or delayed several planned data center projects across multiple countries and U.S. states. This adjustment suggests a potential shift in the AI landscape, where expensive, large-scale data centers might not be the inevitable future. Microsoft's spokesperson acknowledged these changes as a demonstration of their strategy's flexibility in response to evolving AI demands.
 
A team of AI researchers has developed a new open-source library to enhance the communication efficiency of Mixture-of-Experts (MoE) models in distributed GPU environments. This library focuses on improving performance and portability compared to existing methods by utilizing GPU-initiated communication and overlapping computation with network transfers. Their implementation achieves significantly faster communication speeds on both single and multi-node configurations while maintaining broad compatibility across different network hardware through the use of minimal NVSHMEM primitives. While not the absolute fastest in specialized scenarios, it presents a robust and flexible solution for deploying large-scale MoE models.
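The library itself is not named or shown in the episode summary; the following toy sketch only illustrates the general overlap pattern it describes, with Python threads standing in for GPU streams and plain function calls standing in for NVSHMEM transfers. All names here are hypothetical:

```python
# Pipeline pattern: start the transfer of the next chunk of tokens before
# computing on the chunk that has already arrived, so communication time
# hides behind expert computation instead of adding to it.
from concurrent.futures import ThreadPoolExecutor

def send(chunk):
    # Placeholder for a GPU-initiated transfer of tokens to a remote expert.
    return chunk

def expert_compute(chunk):
    # Placeholder expert: double each token's value.
    return [2 * x for x in chunk]

def dispatch_with_overlap(chunks):
    results = []
    with ThreadPoolExecutor(max_workers=1) as comm:  # "communication stream"
        pending = comm.submit(send, chunks[0])       # kick off first transfer
        for nxt in chunks[1:]:
            arrived = pending.result()               # wait for in-flight chunk
            pending = comm.submit(send, nxt)         # next transfer overlaps...
            results.append(expert_compute(arrived))  # ...with this compute
        results.append(expert_compute(pending.result()))
    return results

print(dispatch_with_overlap([[1, 2], [3, 4]]))  # → [[2, 4], [6, 8]]
```

In a real MoE deployment the transfers and expert kernels run asynchronously on the GPU, but the scheduling idea is the same: each transfer is issued one step ahead of the computation that consumes it.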
 
Vana, a decentralized platform originating from an MIT project, aims to shift control of data used for AI training back to individual users. Frustrated by the current model where tech companies profit from user data, Vana allows individuals to upload their information and collectively decide how it's used to develop AI. Users who contribute data gain ownership stakes in the resulting AI models, receiving proportional rewards when those models are utilized. This approach fosters a user-owned network where individuals can pool their data, even across different platforms, to create more powerful and personalized AI applications while maintaining privacy. By enabling users to benefit from the AI they help create, Vana seeks to democratize AI development and break down the data silos of large tech companies. This innovative system has already attracted over a million users and facilitated the creation of numerous user-governed data pools for AI model training.
 
In a January 2025 report, the U.S. Copyright Office addresses the copyrightability of works created using artificial intelligence. This second part of a broader study examines the level of human contribution necessary for AI-generated outputs to receive copyright protection in the United States. The report analyzes public feedback, legal precedents, and international approaches to conclude that current copyright law, requiring human authorship and original expression, can address AI-related issues without legislative changes. It clarifies that while AI can be a tool assisting human creativity, purely AI-generated content lacking sufficient human control or input is not copyrightable. However, humans can obtain copyright for their original contributions within AI-generated works, such as creative prompts, expressive inputs that are retained, and significant modifications or arrangements of AI outputs. The Copyright Office emphasizes that copyright aims to protect human creativity and will continue monitoring technological advancements.
 