Player FM - Internet Radio Done Right
48 subscribers
Added four years ago
Content provided by Anton Chuvakin. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Anton Chuvakin or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here: https://he.player.fm/legal.
EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security
Manage episode 469435399 series 2892548
Guest:
- Yigael Berger, Head of AI, Sweet Security
Topic:
- Where do you see a gap between the “promise” of LLMs for security and how they are actually used in the field to solve customer pains?
- I know you use LLMs for anomaly detection. How does that "trick" work? What is it good for? How effective do you think it will be?
- Can you compare this to other anomaly detection methods? Also, won’t this be costly - how do you manage to keep inference costs under control at scale?
- SOC teams often grapple with the tradeoff between “seeing everything” so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
- We hear from folks who developed an automated approach to handle a reviews queue previously handled by people. Inevitably even if precision and recall can be shown to be superior, executive or customer backlash comes hard with a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
- What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
- So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders?
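One of the questions above turns on precision and recall for an automated review queue. As a reminder of the definitions, here is a minimal sketch; the alert IDs and labels are invented for illustration, not from the episode:

```python
# Precision/recall for an automated alert-review queue (illustrative data).
# precision = TP / (TP + FP); recall = TP / (TP + FN).

def precision_recall(predicted, actual):
    """predicted/actual are sets of alert IDs considered true incidents."""
    tp = len(predicted & actual)   # flagged and real
    fp = len(predicted - actual)   # flagged but benign
    fn = len(actual - predicted)   # real but missed
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Hypothetical queue: the model flags 4 alerts, 3 are real incidents,
# and it misses 1 real incident (the costly false negative).
model_flags = {"a1", "a2", "a3", "a9"}
real_incidents = {"a1", "a2", "a3", "a7"}
p, r = precision_recall(model_flags, real_incidents)
print(p, r)  # 0.75 0.75
```

Even with both metrics at 0.75, the single missed incident ("a7") is the kind of false negative the question says draws executive backlash.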
Resource:
- EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
- EP194 Deep Dive into ADR - Application Detection and Response
- EP135 AI and Security: The Good, the Bad, and the Magical
- Andrej Karpathy series on how LLMs work
- Sweet Security blog
227 episodes
All episodes
EP226 AI Supply Chain Security: Old Lessons, New Poisons, and Agentic Dreams (24:39)
Guest:
- Christine Sizemore, Cloud Security Architect, Google Cloud
Topics:
- Can you describe the key components of an AI software supply chain, and how do they compare to those in a traditional software supply chain?
- I hope folks listening have heard past episodes where we talked about poisoning training data. What are the other interesting and unexpected security challenges and threats associated with the AI software supply chain?
- We like to say that history might not repeat itself but it does rhyme - what are the rhyming patterns in security practices people need to be aware of when it comes to securing their AI supply chains?
- We've talked a lot about technology and process - what are the organizational pitfalls to avoid when developing AI software? What organizational "smells" are associated with irresponsible AI development?
- We are all hearing about agentic security - so can we just ask the AI to secure itself?
- Top 3 things to do to secure the AI software supply chain for a typical org?
Resources:
- Video "Securing AI Supply Chain: Like Software, Only Not" blog (and paper)
- "Securing the AI software supply chain" webcast
- EP210 Cloud Security Surprises: Real Stories, Real Lessons, Real "Oh No!" Moments
- Protect AI issue database
- "Staying on top of AI Developments"
- "Office of the CISO 2024 Year in Review: AI Trust and Security"
- "Your Roadmap to Secure AI: A Recap" (2024)
- "RSA 2025: AI's Promise vs. Security's Past - A Reality Check" (references our "data as code" presentation)
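One concrete supply-chain control behind the provenance topics above is pinning the digest of a downloaded model artifact. A minimal sketch; the file contents and digest handling are illustrative assumptions, not a description of any product discussed in the episode:

```python
# Minimal artifact-integrity check for a downloaded model file:
# recompute SHA-256 and compare to a digest pinned out-of-band.
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Return True only if the artifact hashes to the pinned digest."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

artifact = b"model-weights-v1"  # stand-in for real file contents
pinned = hashlib.sha256(artifact).hexdigest()  # normally stored separately
assert verify_artifact(artifact, pinned)
assert not verify_artifact(b"tampered-weights", pinned)
```

Real AI supply chains layer signatures and provenance attestations on top of this, but the hash comparison is the primitive underneath.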

EP225 Cross-promotion: The Cyber-Savvy Boardroom Podcast: EP2 Christian Karam on the Use of AI (24:46)
Hosts:
- David Homovich, Customer Advocacy Lead, Office of the CISO, Google Cloud
- Alicja Cade, Director, Office of the CISO, Google Cloud
Guest:
- Christian Karam, Strategic Advisor and Investor
Resources:
- EP2 Christian Karam on the Use of AI (as aired originally)
- The Cyber-Savvy Boardroom podcast site
- The Cyber-Savvy Boardroom podcast on Spotify
- The Cyber-Savvy Boardroom podcast on Apple Podcasts
- The Cyber-Savvy Boardroom podcast on YouTube
- Now hear this: A new podcast to help boards get cyber savvy (without the jargon)
- Board of Directors Insights Hub
- Guidance for Boards of Directors on How to Address AI Risk

EP224 Protecting the Learning Machines: From AI Agents to Provenance in MLSecOps (30:40)
Guest:
- Diana Kelley, CSO at Protect AI
Topics:
- Can you explain the concept of "MLSecOps" as an analogy with DevSecOps, with 'Dev' replaced by 'ML'? This has nothing to do with SecOps, right?
- What are the most critical steps a CISO should prioritize when implementing MLSecOps within their organization? What gets better when you do it?
- How do we adapt traditional security testing, like vulnerability scanning, SAST, and DAST, to effectively assess the security of machine learning models? Can we?
- In the context of AI supply chain security, what is the essential role of third-party assessments, particularly regarding data provenance?
- How can organizations balance the need for security logging in AI systems with the imperative to protect privacy and sensitive data? Do we need to decouple security from safety or privacy?
- What are the primary security risks associated with overprivileged AI agents, and how can organizations mitigate these risks?
- Top differences between LLM/chatbot AI security vs AI agent security?
Resources:
- "Airline held liable for its chatbot giving passenger bad advice - what this means for travellers"
- "ChatGPT Spit Out Sensitive Data When Told to Repeat 'Poem' Forever"
- Secure by Design for AI by Protect AI
- "Securing AI Supply Chain: Like Software, Only Not"
- OWASP Top 10 for Large Language Model Applications
- OWASP Top 10 for AI Agents (draft)
- MITRE ATLAS
- "Demystifying AI Security: New Paper on Real-World SAIF Applications" (and paper)
- LinkedIn Course: Security Risks in AI and ML: Categorizing Attacks and Failure Modes

EP223 AI Addressable, Not AI Solvable: Reflections from RSA 2025 (31:37)
Guests:
- No guests, just us in the studio
Topics:
- At RSA 2025, did we see solid, measurably better outcomes from AI use in security, or mostly just "sizzle" and good ideas with potential?
- Are the promises of an "AI SOC" repeating the mistakes seen with SOAR in previous years regarding fully automated security operations? Does "AI SOC" work according to the RSA floor?
- How realistic is the vision expressed by some [yes, really!] that AI progress could lead to technical teams, including IT and security, shrinking dramatically or even to zero in a few years?
- Why do companies continue to rely on decades-old or "non-leading" security technologies, and what role does the concept of an "organizational change budget" play in this inertia?
- Is being "AI Native" fundamentally better for security technologies compared to adding AI capabilities to existing platforms, or is the jury still out? Got "an AI-native SIEM"? Be ready to explain how yours is better!
Resources:
- EP172 RSA 2024: Separating AI Signal from Noise, SecOps Evolves, XDR Declines?
- EP119 RSA 2023 - What We Saw, What We Learned, and What We're Excited About
- EP70 Special - RSA 2022 Reflections - Securing the Past vs Securing the Future
- RSA ("RSAI") Conference 2024 Powered by AI with AI on Top - AI Edition (Hey AI, Is This Enough AI?) [Anton's RSA 2024 recap blog]
- New Paper: "Future of the SOC: Evolution or Optimization - Choose Your Path" (Paper 4 of 4.5) [talks about the change budget discussed]

EP222 From Post-IR Lessons to Proactive Security: Deconstructing Mandiant M-Trends (35:19)
Guests:
- Kirstie Failey @ Google Threat Intelligence Group
- Scott Runnels @ Mandiant Incident Response
Topics:
- What is the hardest thing about turning distinct incident reports into a fun-to-read and useful report like M-Trends?
- How much are the lessons and recommendations skewed by the fact that they are all "post-IR" stories?
- Are "IR-derived" security lessons the best way to improve security? Isn't this a bit like learning how to build safely from fires vs learning safety engineering?
- The report implies that F500 companies suffer from certain security issues despite their resources; does this automatically mean that smaller companies suffer from the same but more?
- "Dwell time" metrics sound obvious, but is there magic behind how this is done? Sometimes "dwell time going down" is not automatically the defender's win, right?
- What is the expected minimum dwell time? If "it depends", then what does it depend on?
- Impactful outliers vs general trends ("by the numbers"), what teaches us more about security?
- Why do we seem to repeat the mistakes so much in security? Do we think it is useful to give the same advice repeatedly if the data implies that it is correct advice but people clearly do not do it?
Resources:
- M-Trends 2025 report
- Mandiant Attack Lifecycle
- EP205 Cybersecurity Forecast 2025: Beyond the Hype and into the Reality
- EP147 Special: 2024 Security Forecast Report
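The dwell-time metric discussed above is straightforward to compute once incident timestamps exist: days from initial compromise to detection, usually summarized as a median. A sketch with made-up incident records (not M-Trends data):

```python
# Median "dwell time" (days from initial compromise to detection),
# computed over illustrative incident records.
from datetime import date
from statistics import median

incidents = [
    {"compromised": date(2024, 1, 1), "detected": date(2024, 1, 11)},
    {"compromised": date(2024, 2, 5), "detected": date(2024, 2, 8)},
    {"compromised": date(2024, 3, 1), "detected": date(2024, 4, 15)},
]

dwell_days = [(i["detected"] - i["compromised"]).days for i in incidents]
print(median(dwell_days))  # 10
```

The episode's caveat shows up even here: the 45-day outlier barely moves the median, and a falling median can simply mean more ransomware (which announces itself quickly) rather than better detection.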

EP221 Special - Semi-Live from Google Cloud Next 2025: AI, Agents, Security ... Cloud? (30:26)
Guests:
- No guests [Tim in Vegas and Anton remote]
Topics:
- So, another Next is done. Beyond the usual Vegas chaos, what was the overarching security theme or vibe you [Tim] felt dominated the conference this year?
- Thinking back to Next '24, what felt genuinely different this year versus just the next iteration of last year's trends?
- Last year, we pondered the 'Cloud Island' vs. 'Cloud Peninsula'. Based on Next 2025, is cloud security becoming more integrated with general cyber security, or is it still its own distinct domain?
- What wider trends did you observe, perhaps from the expo floor buzz or partner announcements, that security folks should be aware of?
- What was the biggest surprise for you at Next 2025? Something you absolutely didn't see coming?
- Putting on your prediction hats (however reluctantly): based on Next 2025, what do you foresee as the major cloud security focus or challenge for the industry in the next 12 months?
- If a busy podcast listener could only take one key message or action item away from everything announced and discussed at Next 2025, what should it be?
Resources:
- EP169 Google Cloud Next 2024 Recap: Is Cloud an Island, So Much AI, Bots in SecOps

EP220 Big Rewards for Cloud Security: Exploring the Google VRP (29:13)
Guests:
- Michael Cote, Cloud VRP Lead, Google Cloud
- Aadarsh Karumathil, Security Engineer, Google Cloud
Topics:
- Vulnerability response at cloud scale sounds very hard! How do you triage vulnerability reports and make sure we're addressing the right ones in the underlying cloud infrastructure?
- How do you determine how much to pay for each vulnerability?
- What is the largest reward we paid? What was it for?
- What products get the most submissions? Is this driven by the actual product security or by trends and fashions like AI?
- What are the most likely rejection reasons?
- What makes for a very good - and exceptional? - vulnerability report? We hear we pay more for "exceptional" reports; what does that mean?
- In college Tim had a roommate who would take us out drinking on his Google web app vulnerability rewards. Do we have something similar for people reporting vulnerabilities in our cloud infrastructure? Are people making real money off this?
- How do we actually uniquely identify vulnerabilities in the cloud? CVE does not work well, right?
- What are the expected risk reduction benefits from Cloud VRP?
Resources:
- Cloud VRP site
- Cloud VRP launch blog
- CVR: The Mines of Kakadûm

EP219 Beyond the Buzzwords: Decoding Cyber Risk and Threat Actors in Asia Pacific (31:46)
Guest:
- Steve Ledzian, APAC CTO, Mandiant at Google Cloud
Topics:
- We've seen a shift in how boards engage with cybersecurity. From your perspective, what's the most significant misconception boards still hold about cyber risk, particularly in the Asia Pacific region, and how has that impacted their decision-making?
- Cybersecurity is rife with jargon. If you could eliminate or redefine one overused term, which would it be and why? How does this overloaded language specifically hinder effective communication and action in the region?
- The Mandiant Attack Lifecycle is a well-known model. How has your experience in the East Asia region challenged or refined this model? Are there unique attack patterns or actor behaviors that necessitate adjustments?
- Two years post-acquisition, what's been the most surprising or unexpected benefit of the Google-Mandiant combination?
- M-Trends data provides valuable insights, particularly regarding dwell time. Considering the Asia Pacific region, what are the most significant factors reducing dwell time, and how do these trends differ from global averages?
- Given your expertise in Asia Pacific, can you share an observation about a threat actor's behavior that is often overlooked in broader cybersecurity discussions?
- Looking ahead, what's the single biggest cybersecurity challenge you foresee for organizations in the Asia Pacific region over the next five years, and what proactive steps should they be taking now to prepare?
Resources:
- EP177 Cloud Incident Confessions: Top 5 Mistakes Leading to Breaches from Mandiant
- EP156 Living Off the Land and Attacking Critical Infrastructure: Mandiant Incident Deep Dive
- EP191 Why Aren't More Defenders Winning? Defender's Advantage and How to Gain it!

EP218 IAM in the Cloud & AI Era: Navigating Evolution, Challenges, and the Rise of ITDR/ISPM (30:10)
Guest:
- Henrique Teixeira, Senior VP of Strategy, Saviynt, ex-Gartner analyst
Topics:
- How have you seen IAM evolve over the years, especially with the shift to the cloud, and now AI? What are some of the biggest challenges and opportunities these two shifts present?
- ITDR (Identity Threat Detection and Response) and ISPM (Identity Security Posture Management) are emerging areas in IAM. How do you see these fitting into the overall IAM landscape? Are they truly distinct categories or just extensions of existing IAM practices?
- Shouldn't ITDR just be part of your Cloud DR or maybe even your SecOps tool of choice? It seems goofy to try to stand ITDR on its own when the impact of an identity compromise is entirely a function of what that identity can access or do, no?
- Regarding workload vs. human identity, could you elaborate on the unique security considerations for each? How does the rise of machine identities and APIs impact IAM approaches?
- We had a whole episode around machine identity that involved turtles - what have you seen in the machine identity space and how have you seen users mess it up?
- The cybersecurity world is full of acronyms. Any tips on how to create a memorable and impactful acronym?
Resources:
- EP166 Workload Identity, Zero Trust and SPIFFE (Also Turtles!)
- EP182 ITDR: The Missing Piece in Your Security Puzzle or Yet Another Tool to Buy?
- EP127 Is IAM Really Fun and How to Stay Ahead of the Curve in Cloud IAM?
- EP94 Meet Cloud Security Acronyms with Anna Belak
- EP162 IAM in the Cloud: What it Means to Do It 'Right' with Kat Traxler
- EP199 Your Cloud IAM Top Pet Peeves (and How to Fix Them)
- EP188 Beyond the Buzzwords: Identity's True Role in Cloud and SaaS Security
- "Playing to Win: How Strategy Really Works" book
- "Open" book

EP217 Red Teaming AI: Uncovering Surprises, Facing New Threats, and the Same Old Mistakes? (23:11)
Guest:
- Alex Polyakov, CEO at Adversa AI
Topics:
- Adversa AI is known for its focus on AI red teaming and adversarial attacks. Can you share a particularly memorable red teaming exercise that exposed a surprising vulnerability in an AI system? What was the key takeaway for your team and the client?
- Beyond traditional adversarial attacks, what emerging threats in the AI security landscape are you most concerned about right now?
- What trips up most clients: classic security mistakes in AI systems or AI-specific mistakes?
- Are there truly new mistakes in AI systems or are they old mistakes in new clothing?
- I know it is not your job to fix it, but much of this is unfixable, right?
- Is it a good idea to use AI to secure AI?
Resources:
- EP84 How to Secure Artificial Intelligence (AI): Threats, Approaches, Lessons So Far
- AI Red Teaming Reasoning LLM US vs China: Jailbreak Deepseek, Qwen, O1, O3, Claude, Kimi
- Adversa AI blog
- Oops! 5 serious gen AI security mistakes to avoid
- Generative AI Fast Followership: Avoid These First Adopter Security Missteps

EP216 Ephemeral Clouds, Lasting Security: CIRA, CDR, and the Future of Cloud Investigations (31:43)
Guests:
- James Campbell, CEO, Cado Security
- Chris Doman, CTO, Cado Security
Topics:
- Cloud Detection and Response (CDR) vs Cloud Investigation and Response Automation (CIRA) ... what's the story here?
- There is an "R" in CDR, right? Can't my (modern) SIEM/SOAR do that? What about this becoming a part of modern SIEM/SOAR in the future?
- What gets better when you deploy a CIRA (a) and your CIRA in particular (b)?
- Ephemerality and security, what are the fun overlaps? Does "E" help "S" or hurt it? What about compliance? Ephemeral compliance sounds iffy…
- Cloud investigations, what is special about them? How does CSPM intersect with this? Is CIRA part of CNAPP?
- A secret question, need to listen for it!
Resources:
- EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
- EP67 Cyber Defense Matrix and Does Cloud Security Have to DIE to Win?
- EP158 Ghostbusters for the Cloud: Who You Gonna Call for Cloud Forensics
- Cloud security incidents (Rami McCarthy)
- Cado resources

EP215 Threat Modeling at Google: From Basics to AI-powered Magic (26:03)
Guest:
- Meador Inge, Security Engineer, Google Cloud
Topics:
- Can you walk us through Google's typical threat modeling process? What are the key steps involved?
- Threat modeling can be applied to various areas. Where does Google utilize it the most? How do we apply this to huge and complex systems?
- How does Google keep its threat models updated? What triggers a reassessment?
- How does Google operationalize threat modeling information to prioritize security work and resource allocation? How does it influence your security posture?
- What are the biggest challenges Google faces in scaling and improving its threat modeling practices? Any stories where we got this wrong?
- How can LLMs like Gemini improve Google's threat modeling activities? Can you share examples of basic and more sophisticated techniques?
- What advice would you give to organizations just starting with threat modeling?
Resources:
- EP12 Threat Models and Cloud Security
- EP150 Taming the AI Beast: Threat Modeling for Modern AI Systems with Gary McGraw
- EP200 Zero Touch Prod, Security Rings, and Foundational Services: How Google Does Workload Security
- EP140 System Hardening at Google Scale: New Challenges, New Solutions
- Threat Modeling manifesto
- EP176 Google on Google Cloud: How Google Secures Its Own Cloud Use
- Awesome Threat Modeling
- Adam Shostack "Threat Modeling: Designing for Security" book
- Ross Anderson "Security Engineering" book
- "How to Solve It" book

EP214 Reconciling the Impossible: Engineering Cloud Systems for Diverging Regulations (29:22)
Guest:
- Archana Ramamoorthy, Senior Director of Product Management, Google Cloud
Topics:
- You are responsible for building systems that need to comply with laws that are often mutually contradictory. It seems technically impossible to do; how do you do this?
- Google is not alone in being a global company with local customers and local requirements. How are we building systems that provide local compliance with global consistency in their use for customers who are similar in scale to us?
- Originally, Google had global systems synchronized around the entire planet - planet-scale supercompute - with atomic clocks. How did we get to a regionalized approach from there?
- Engineering takes a long time. How do we bring enough agility to product definition and engineering design to give our users robust foundations in our systems that also let us keep up with changing and diverging regulatory goals?
- What are some of the biggest challenges you face working in the trusted cloud space?
- Is there something you would like to share about being a woman leader in technology? How did you overcome the related challenges?
Resources:
- Video "Compliance Without Compromise" by Jeanette Manfra (2020, still very relevant!)
- "Good to Great" book
- "Appreciative Leadership" book

EP213 From Promise to Practice: LLMs for Anomaly Detection and Real-World Cloud Security (28:01)
Guest:
- Yigael Berger, Head of AI, Sweet Security
Topics:
- Where do you see a gap between the "promise" of LLMs for security and how they are actually used in the field to solve customer pains?
- I know you use LLMs for anomaly detection. How does that "trick" work? What is it good for? How effective do you think it will be?
- Can you compare this to other anomaly detection methods? Also, won't this be costly - how do you manage to keep inference costs under control at scale?
- SOC teams often grapple with the tradeoff between "seeing everything" so that they never miss any attack, and handling too much noise. What are you seeing emerge in cloud D&R to address this challenge?
- We hear from folks who developed an automated approach to handle a reviews queue previously handled by people. Inevitably, even if precision and recall can be shown to be superior, executive or customer backlash comes hard with a false negative (or a flood of false positives). Have you seen this phenomenon, and if so, what have you learned about handling it?
- What are other barriers that need to be overcome so that LLMs can push the envelope further for improving security?
- So from your perspective, LLMs are going to tip the scale in whose favor - cybercriminals or defenders?
Resource:
- EP157 Decoding CDR & CIRA: What Happens When SecOps Meets Cloud
- EP194 Deep Dive into ADR - Application Detection and Response
- EP135 AI and Security: The Good, the Bad, and the Magical
- Andrej Karpathy series on how LLMs work
- Sweet Security blog
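For context on the "other anomaly detection methods" question, here is the kind of classical statistical baseline that LLM-based approaches are typically compared against: flag values whose z-score exceeds a threshold. The event counts and threshold are invented for illustration:

```python
# Classical z-score anomaly baseline, a common comparison point for
# LLM-based detection: flag values far from the mean (illustrative data).
from statistics import mean, stdev

def zscore_anomalies(values, threshold=2.5):
    """Return indices of values whose |z-score| exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, v in enumerate(values)
            if abs(v - mu) / sigma > threshold]

# Hypothetical hourly API-call counts for one identity; the spike stands out.
counts = [20, 22, 19, 21, 23, 20, 400, 22, 21, 20]
print(zscore_anomalies(counts))  # [6]
```

A baseline like this is cheap but blind to context (a spike might be a legitimate batch job); the tradeoff the episode explores is whether an LLM's contextual judgment is worth its inference cost.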

EP212 Securing the Cloud at Scale: Modern Bank CISO on Metrics, Challenges, and SecOps (33:16)
Guest:
- Dave Hannigan, CISO at Nubank
Topics:
- Tell us about the challenges you're facing as CISO at Nubank, and how are they different from your past life at Spotify?
- You're a big cloud-based operation - what are the key challenges you're tracking in your cloud environments?
- What lessons do you wish you knew back in your previous CISO run [at Spotify]?
- What metrics does your team report for you to understand the security posture of your cloud environments? How do you know "your" cloud use is as secure as you want it to be?
- You're a former Googler, and I'm sure that's not why, so why did you choose to go with Google SecOps for your organization?
Resources:
- "Moving shields into position: How you can organize security to boost digital transformation" blog and the paper
- "For a successful cloud transformation, change your culture first" blog
- "Is your digital transformation secure? How to tell if your team is on the right path" blog
- EP201 Every CTO Should Be a CSTO (Or Else!) - Transformation Lessons from The Hoff
- EP104 CISO Walks Into the Cloud: And The Magic Starts to Happen!
- EP141 Cloud Security Coast to Coast: From 2015 to 2023, What's Changed and What's the Same?
- EP209 vCISO in the Cloud: Navigating the New Security Landscape (and Don't Forget Resilience!)
- "Thinking Fast and Slow" book
- "Turn the Ship Around" book