Currently, I have a field called pluginText which contains the following (italicized words are anonymized to what they represent):

<plugin_output>
The following software are installed on the remote host:
Vendor Software  [version versionnumber] [installed on date]
...
...
...
</plugin_output>

I wish to extract Vendor, Software, and versionnumber into separate fields and need a rex to do so. I am unfamiliar with using rex on this type of list, so I was hoping someone could point me in the right direction.
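A minimal rex sketch for this kind of list, assuming each software line looks exactly like the anonymized sample above (single-word vendor, software name running up to the first bracket), might be a starting point; the capture-group names are just placeholders:

| rex field=pluginText max_match=0 "(?m)^(?<Vendor>\S+)\s+(?<Software>.+?)\s+\[version (?<versionnumber>[^\]]+)\]"

With max_match=0 every line of the list is captured, so Vendor, Software, and versionnumber come back as multivalue fields; mvzip/mvexpand can turn them into one row per entry if needed.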
I am trying to join two searches together to table the combined results by host.

The first search below shows the number of events in the last hour by host, index, and sourcetype:

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

The second search shows the ingest per hour in GB by host:

(index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
| stats sum(b) as Usage by h
| eval Usage=round(Usage/1024/1024/1024,2)
| rename h as host, Usage as usage_lastest_hour
| addtotals
| sort -Total
| fields - Total

Can you please help with how I would join these two searches together to display host, index, sourcetype, events_latest_hour, and usage_lastest_hour? Basically, I want to table the results of the first search and also include the "usage_lastest_hour" results from the second search in the table.
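One hedged way to combine them without join is to append the second search and spread the per-host usage across the rows of the first search with eventstats; this is only a sketch, under the assumption that the host values returned by both searches actually match:

| tstats count as events_latest_hour where index=* by host, index, sourcetype
| append
    [ search index=_internal host=splunk_shc source=*license_usage.log* type=Usage
    | stats sum(b) as Usage by h
    | eval usage_lastest_hour=round(Usage/1024/1024/1024,2)
    | rename h as host
    | fields host usage_lastest_hour ]
| eventstats values(usage_lastest_hour) as usage_lastest_hour by host
| where isnotnull(events_latest_hour)
| table host index sourcetype events_latest_hour usage_lastest_hour

The final where simply drops the helper rows that came from the appended license search once their usage value has been copied onto the matching hosts.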
Register here. This thread is for the Community Office Hours session with the Splunk Threat Research Team on Detecting Remote Code Executions on Wed, Jun 5, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions about using the latest security content developed by the Splunk Threat Research Team to detect RCEs, including:
- How to find and access security content designed to help defend against RCEs
- Best practices and practical tips for using this content
- Specific questions about recently released content for detecting RCEs impacting Jenkins servers, Ivanti VPN devices, and Confluence Data Center and Server
- Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hi All, I have set up the Object and Event input configuration in the Salesforce TA. I am able to see the object logs but unable to see the event logs in Splunk Cloud. Any directions for triaging the issue? Appropriate permissions have been provided for the Salesforce user.
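A hedged first triage step is to check the add-on's own logging in _internal for errors from the event-log input; the source pattern below is an assumption, so adjust it to whatever log files the TA writes in your environment:

index=_internal source=*splunk_ta_salesforce* (ERROR OR WARN)
| stats count by source

Authentication, API-permission, and checkpoint errors for the event input usually surface there even when the object input is working.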
I haven't found a definitive answer in any of the docs yet.  Is it possible to utilize Splunk Smartstore when everything is in Splunk Cloud and we do not have an on-prem Enterprise?
Take a look below to explore our upcoming Community Office Hours, Tech Talks, and Webinars in April. This post will be updated monthly so be sure to bookmark it and check back for new events!

What are Community Office Hours?
Community Office Hours is an interactive 60-minute Zoom series where participants can ask questions and engage with technical Splunk experts on a variety of topics. Whether you're just starting your journey with Splunk or looking for best practices to take your deployment to the next level, Community Office Hours provides a safe and open environment for you to get help. If you have an issue you can't seem to resolve, have a question you're eager to get answered by Splunk experts, are exploring new use cases, or just want to sit and listen in, Community Office Hours is for you!

What are Tech Talks?
Tech Talks are designed to accelerate adoption and ensure your success. In these engaging 60-minute sessions, we dive deep into best practices, share valuable insights, and explore additional use cases to expand your knowledge and proficiency with our products. Whether you're looking to optimize your workflows, discover new functionalities, or troubleshoot challenges, Tech Talks is your go-to resource.

SECURITY

Office Hours | Security: Automated Threat Analysis with Splunk Attack Analyzer
April 17, 2024 at 1pm PT
This Community Office Hours session with Neal Iyer, Sr. Principal Product Manager, will be focused on automated threat analysis with Splunk Attack Analyzer. Join us for an office hours session to ask questions about how automated threat analysis can enhance your existing security workflows, including:
- Practical applications and common use cases
- How Splunk Attack Analyzer integrates with other Splunk security solutions
- Anything else you'd like to learn!

Tech Talk | How to Uplevel Your Threat Hunting with the PEAK Framework and Splunk
April 24, 2024 at 11am PT
This tech talk shares how the Splunk Threat Hunting team seamlessly integrated the PEAK Threat Hunting Framework into their workflow while leveraging Splunk. Explore Splunk's end-to-end processes with tips and tricks to unleash a pipeline of hunters and turn the PEAK Threat Hunting framework from a concept into a powerful tool in your organization. Join Splunk threat hunters Sydney Marrone and Robin Burkett to learn about:
- The PEAK threat-hunting framework
- How you can customize PEAK for your environment
- How to enable your SOC analysts to be successful threat hunters
- Real-world examples of PEAK hunt types for actionable insights

OBSERVABILITY

Office Hours | Observability: APM: Session 2
April 10, 2024 at 1pm PT
This is your opportunity to ask questions about your current Observability APM challenge or use case, including:
- Sending traces to APM
- Tracking service performance with dashboards
- Setting up deployment environments
- AutoDetect detectors
- Enabling Database Query Performance
- Setting up business workflows
- Implementing high-value features (Tag Spotlight, Trace View, Service Map)
- Anything else you'd like to learn!

Tech Talk | Extending Observability Content to Splunk Cloud
April 23, 2024 at 11am PT
Learn how to leverage Splunk Observability data in your Splunk Cloud Platform!
- Improve your Splunk platform's capabilities with Splunk Observability Cloud
- Accelerate root cause analysis with Related Content in Splunk Cloud Platform
- Enhance troubleshooting with Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring

Office Hours | Observability: Usage and Data Control
April 24, 2024 at 1pm PT
This is your opportunity to ask questions about your current Observability Usage and Data Control challenge or use case, including:
- Metrics Pipeline Management in Splunk Infrastructure Monitoring (IM)
- Metric Cardinality
- Aggregation Rules
- Impact and benefits of data dropping
- Anything else you'd like to learn!
Splunk Lantern is Splunk's customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that's possible with data sources and data types in Splunk. This month we're highlighting a brand new set of content on Lantern. Splunk Outcome Paths show you how to achieve common goals that many Splunk customers are looking for in order to run an efficient, performant Splunk implementation. As usual, we're also sharing the full list of articles published over the past month. Read on to find out more.

Splunk Outcome Paths

In today's dynamic business landscape, navigating toward desired outcomes requires a strategic approach. If you're a newer Splunk customer or looking to expand your Splunk implementation, it might not always be clear how to do this while reducing costs, mitigating risks, improving performance, or increasing efficiencies. Splunk Outcome Paths have been designed to show you all the right ways to do all of these things. Each of these paths has been created and reviewed by Splunk experts who've seen the best ways to address specific business and technical challenges that can impact the smooth running of any Splunk implementation. Whatever your business size or type, Splunk Outcome Paths offer a range of strategies tailored to suit your individual needs:
- If you're seeking to reduce costs, you can explore strategies such as reducing infrastructure footprint, minimizing search load, and optimizing storage.
- Mitigating risk involves implementing robust compliance measures, establishing disaster recovery protocols, and safeguarding against revenue impacts.
- Improving performance means planning for scalability, enhancing data management, and optimizing systems.
- Increasing efficiencies focuses on deploying automation strategies, bolstering data management practices, and assessing readiness for cloud migration.
Choosing a path with strategies tailored to your priorities can help you get more value from Splunk, and grow in clarity and confidence as you learn how to manage your implementation in a tried-and-true manner. We're keen to hear more about what you think of Splunk Outcome Paths and whether there are any topics you'd like to see included in future. You can add a comment below to send your ideas to our team.

Use Case Explorer Updates

Splunk Lantern's Use Case Explorer for Security and the Use Case Explorer for Observability have become popular tools with Splunk customers looking for a framework for their Security or Observability journey. But technology changes fast, and today's organizations are under more pressure than ever from cyber threats, outages, and other challenges that leave little room for error. That's why on team Lantern we've been working hard to realign our Use Case Explorers with Splunk's latest thinking around how to achieve digital resilience. Our Use Case Explorers follow a prescriptive path for organizations to improve digital resilience across security and observability. Each of the Explorers starts with use cases to help you achieve foundational visibility so you can access the information your teams need. With better visibility you can then integrate guided insights that help you respond to what's most important. From there, teams can be more proactive and automate processes, and ultimately focus on unifying workflows that provide sophisticated and fast resolutions for teams and customers.

If you haven't yet checked out our Use Case Explorer for Security or the Use Case Explorer for Observability, take a look today, and drop us a comment below if there's anything you'd like to see in a future update!

This Month's New Articles

Here's the rest of everything that's new on Lantern, published over the month of March:
- Enhancing endpoint monitoring with threat intelligence
- De-identifying PII consistently with hashing in Edge Processor
- Using lessons learned from incidents to harden your SOC processes
- Using ingest actions to filter AWS CloudTrail Logs
- Using ingest actions to filter AWS VPC Flow Logs
- Applying Benford's law of distribution to spot fraud
- Proactive Response: Orchestrate response workflows
- Tracking assets when recovering from an incident
- Proactive Response: Automate threat analysis
- Proactive Response: Automate containment and response actions
- Optimized Workflows: Automate complete TDIR life cycle
- Optimized Workflows: Federate access and analytics
- Configuring Windows event logs for Enterprise Security use
- Unified Workflows: Align IT and Business with Service Monitoring
- Guided Insights: Understand the Impact of Changes
- Unified Workflows: Enable Self-Service Observability
- Foundational Visibility: Optimize Cloud Monitoring
- Proactive Response: Prevent Outages
- Proactive Response: Debug Problems in Microservices
- Proactive Response: Optimize End-User Experiences
- Enabling Windows event log process command line logging via group policy object

We hope you've found this update helpful. Thanks for reading!
Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
I have a timestamp with this format: "2024-01-01T20:00:00.190000000Z". I can convert this to a normal format using rex; however, I want to know if there is an alternative way to convert it to a normal time format.
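A hedged alternative to rex is to parse the string with strptime and re-render it with strftime; a sketch, assuming the value lives in a field called ts (the subsecond specifier %9N is the part most likely to need adjusting for your Splunk version):

| eval ts_epoch=strptime(ts, "%Y-%m-%dT%H:%M:%S.%9NZ")
| eval ts_normal=strftime(ts_epoch, "%Y-%m-%d %H:%M:%S.%3N")

If the fractional seconds are not needed at all, trimming them first (or formatting without %3N) keeps things simpler.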
Hi. I'm trying to use a subsearch, but I'm not sure what I am doing wrong. First, the inner search is a list of accounts like this one:

index=main sourcetype=vpacmanagement
| eval DateStamp3=strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
| eval MemberName2 = split(TeamMember, "\\")
| eval Member2 = mvindex(MemberName2,1)
| eval Member2=upper(Member2)
| where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member2 = "ADMMICHAEL_HAYES3"
| dedup WONumber
| rename Member2 as Member
| fields Member

I get one account, all OK so far. But when I use that search inside an outer search:

index=main sourcetype=vpacmanagement
| join Member
    [search index=main sourcetype=vpacmanagement
    | eval DateStamp3=strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
    | eval MemberName2 = split(TeamMember, "\\")
    | eval Member2 = mvindex(MemberName2,1)
    | eval Member2=upper(Member2)
    | where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member2 = "ADMMICHAEL_HAYES3"
    | dedup WONumber
    | rename Member2 as Member
    | fields Member]
| eval DateStamp2=strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
| eval month = strftime(DateStamp2, "%m")
| eval year = strftime(DateStamp2, "%Y")
| eval GroupName = split(DomainGroup, "\\"), MemberName = split(TeamMember, "\\")
| eval Name = mvindex(GroupName,1), Member = mvindex(MemberName,1)
| eval RequestType = upper(RequestType), Name = upper(Name), Member=upper(Member)
| where not like(Status, "%/%/%") and DateStamp2 > relative_time(now(), "-2d")
| dedup RequestType, DomainGroup, TeamMember
| fields WONumber, DateStamp, ResourceSteward, RequestType, Name, Member, Status
| table WONumber, DateStamp, ResourceSteward, RequestType, Name, Member, Status
| sort DateStamp2

As you can see, I do some calculations and use the Member field as the value to join on, but it is still not returning any account from the outer search, even though the element exists in the outer search. Does anyone know what I am missing? Thanks
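A hedged observation: at the point where | join Member runs, the outer search has not yet created a Member field (it is only built by the later evals), so the join has nothing to match on. One sketch that avoids join entirely is to compute Member first and let the subsearch act as a plain filter; the field names are taken from the post:

index=main sourcetype=vpacmanagement
| eval Member=upper(mvindex(split(TeamMember, "\\"), 1))
| search
    [ search index=main sourcetype=vpacmanagement
    | eval DateStamp3=strptime(DateStamp, "%Y-%m-%d %H:%M:%S")
    | eval Member=upper(mvindex(split(TeamMember, "\\"), 1))
    | where DateStamp3 > relative_time(now(), "-4d") AND like(Status, "%/%/%") AND Member="ADMMICHAEL_HAYES3"
    | dedup WONumber
    | fields Member ]
| ...

The rest of the original calculations (DateStamp2, Name, the Status filter, and the final table) would then follow after the | search filter.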
Where is the web server actually installed and run from for SOAR in a RHEL environment? Unlike the Splunk Web UI, where I can modify the web.conf file, for SOAR I only see a massive number of .py files everywhere. I need to figure out where it actually starts and sets its paths, specifically where SSL is chosen. Assume I have installed SOAR to /data. Thanks for any assistance!
I have an alert based on the below search (obfuscated):

...
| eval APPDIR=source
| rex field=APPDIR mode=sed "s|/logs\/.*||g"
| eventstats values(APPDIR) as APPDIRS
| eval Level=if("/app/5000" IN (APPDIRS), "PRODUCTION", "Non-production")
| eval APPDIRS=mvjoin(APPDIRS, ",")

The idea is to discern the affected application instance (there are multiple logs under each /app/instance/logs/ directory) and then to determine whether the instance is a production one or not. In the search results all three new fields (APPDIR, APPDIRS, and Level) are populated as expected, but they don't show up in the e-mails. The "Subject: $Level$ app in $APPDIRS$" expands to merely "Subject:  app in ". Nor are the fields expanded in the body of the alert e-mail. Now, I understand that event-specific fields, like the singular APPDIR above, cannot be expected to work in an alert. But the plural APPDIRS, as well as Level, are aggregates, aren't they? What am I doing wrong, and how do I fix it?
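A hedged note: in the e-mail alert action, per-result fields are normally referenced as $result.fieldname$ rather than $fieldname$, and only the values from the first result row are substituted. Under that assumption the subject would look like this, provided Level and APPDIRS are present on the first row of the results:

Subject: $result.Level$ app in $result.APPDIRS$

Since eventstats and the subsequent evals put APPDIRS and Level on every row, the first row should carry them; if not, collapsing the results with a final stats or table so the fields are guaranteed on row one is a common workaround.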
Hello,

Can someone help me extract the fields from this nested JSON raw log?

{"eventVersion":"1.09","userIdentity":{"type":"AssumedRole","principalId":"AROAUDGMTGGHXY5YL2EW6:redlock","arn":"arn:aws:sts::281749434767:assumed-role/PrismaCloudRole-804603675133320192-member/redlock","accountId":"281749434767","accessKeyId":"ASIAUDGMTGGHRRR2WZT2","sessionContext":{"sessionIssuer":{"type":"Role","principalId":"AROAUDGMTGGHXY5YL2EW6","arn":"arn:aws:iam::281749434767:role/PrismaCloudRole-804603675133320192-member","accountId":"281749434767","userName":"PrismaCloudRole-804603675133320192-member"},"attributes":{"creationDate":"2024-04-09T05:58:35Z","mfaAuthenticated":"false"}}},"eventTime":"2024-04-09T12:43:01Z","eventSource":"athena.amazonaws.com","eventName":"ListWorkGroups","awsRegion":"us-west-2","sourceIPAddress":"52.52.50.152","userAgent":"Vert.x-WebClient/4.4.6","requestParameters":{"maxResults":50},"responseElements":null,"requestID":"59f0ad81-7607-40bb-a40b-eab3fad0fb7a","eventID":"4bc352ff-0cc5-49cb-9b0e-2784bffbb58f","readOnly":true,"eventType":"AwsApiCall","managementEvent":true,"recipientAccountId":"281749434767","eventCategory":"Management","tlsDetails":{"tlsVersion":"TLSv1.3","cipherSuite":"TLS_AES_128_GCM_SHA256","clientProvidedHostHeader":"athena.us-west-2.amazonaws.com"}}

logSource: aws-controltower/CloudTrailLogs:o-bj312h8hh6_281749434767_CloudTrail_us-east-1
logSourceType: aws:cloudwatchlogs
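If the sourcetype is not already extracting these automatically (KV_MODE=json or INDEXED_EXTRACTIONS=json), a hedged sketch with spath, assuming the JSON above is the entire _raw event:

| spath
| table eventTime, eventName, sourceIPAddress, userIdentity.sessionContext.sessionIssuer.userName, tlsDetails.tlsVersion

Running | spath with no arguments walks the whole JSON and creates dotted field names for every nested key, which can then be renamed to friendlier names.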
App started successfully (id: 1712665900147) on asset:

Loaded action execution configuration
executing action: test_asset_connectivity
Connecting to 192.168.208.144...
Connectivity test failed
1 action failed
Failed to connect to PHANTOM server. No route to host.
Connectivity test failed

I am facing this issue and have tried every possible way to resolve it.
Hi all, I created a volume and changed the homePath for all indexes to use this volume. Now I can't search events that existed before this volume was created, and the search heads only show events that are on this volume. How can I move the old, existing events to this volume so I can search them? Thank you.
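A hedged way to confirm which buckets are still sitting on the old path before anything is moved is dbinspect; a sketch, with my_index as a placeholder index name:

| dbinspect index=my_index
| table bucketId, state, path
| sort path

As a general approach (not a step-by-step procedure), the pre-existing warm and cold buckets have to be physically relocated into the new volume's homePath/coldPath directories while the indexer is stopped; the new volume only receives buckets created after the change.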
Hello guys, I'm currently trying to set up Splunk Enterprise in a clustered architecture (3 search heads and 3 indexers) on Kubernetes using the official Splunk Operator and the splunk-enterprise Helm chart. While trying to change the initial admin credentials on all the instances, I hit the following issue: every instance comes up and becomes ready as a Kubernetes pod except for the indexers, which never start and remain in an error phase without any logs indicating the reason. The following is a snippet of my values.yaml file, which is provided to the Splunk Enterprise chart:

sva:
  c3:
    enabled: true
    indexerClusters:
      - name: idx
    searchHeadClusters:
      - name: shc
indexerCluster:
  enabled: true
  name: "idx"
  replicaCount: 3
  defaults:
    splunk:
      hec_disabled: 0
      hec_enableSSL: 0
      hec_token: "test"
      password: "admintest"
      pass4SymmKey: "test"
      idxc:
        secret: "test"
      shc:
        secret: "test"
  extraEnv:
    - name: SPLUNK_DEFAULTS_URL
      value: "/mnt/splunk-defaults/default.yml"

Initially I was not passing SPLUNK_DEFAULTS_URL, but after some debugging I found that the "defaults" field only writes to /mnt/splunk-defaults/default.yml, while by default all instances read from /mnt/splunk-secrets/default.yml, so I had to change it. After that, the admin password did change to "admintest" on all Splunk instances, but the indexer pods still would not start.

Note: I also tried to change the password by providing the SPLUNK_PASSWORD environment variable to all instances, with the same behavior.
Hi all, Since the redesign of the new Incident Review page, we appear to have lost the ability to search for Notables using a ShortID. With the old dashboard this was achieved by selecting Associations from the filters and entering the ShortID you were looking for, but the new Incident Review dashboard appears to have taken this functionality away. Is there any way to achieve this?
Hi All, One of our teams has implemented an incoming webhook from Splunk into MS Teams to post a message when an alert is triggered. We encountered what seems to be a bug where one specific message could not be replied to or reacted to. Strangely enough, viewing the message on a mobile device would allow you to reply and react to it. Every other alert message before and after this one we have been able to reply to.
I am trying to find the duration for a time span. The "in" and "out" numbers are included in the data as type: number. I attempted:

in = 20240401183030
out = 20240401193030

| convert mktime(in) AS IN
| convert mktime(out) AS OUT
| eval Duration = OUT - IN

But this does not perform the correct time math. I have not been able to find a function that directly converts a number to a time, or some other way to get the correct duration between the two.
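Since the values are plain numbers rather than epoch seconds, a hedged sketch is to parse them with strptime using a compact format string; the field names in/out are taken from the post:

| eval IN=strptime(tostring(in), "%Y%m%d%H%M%S")
| eval OUT=strptime(tostring(out), "%Y%m%d%H%M%S")
| eval Duration=OUT-IN
| eval Duration_readable=tostring(Duration, "duration")

strptime needs a string, hence the tostring() wrappers; the final line renders the difference in seconds as HH:MM:SS.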
Hi all, thanks in advance for your time! I have a problem writing a properly working query for this case study: I need to take data from index=email1 to find matching data in index=email2. I tried to do it this way: from index=email1 I take the fields src_user and recipient and use a search to look for them in the email2 index. Query examples that I used:

index=email1 sourcetype=my_sourcetype source_user=*
    [ search index=email2 sourcetype=my_sourcetype source_user=*
    | fields source_user ]

OR

index=email1 sourcetype=my_sourcetype
| join src_user, recipient
    [search index=emai2 *filters*]

Everything looked OK in the control sample (I found events in a 10-minute window, e.g. 06:00-06:10) that at first glance matched, but when I extended the search time, e.g. to 24h, it did not show me any events, not even those that matched in the short time window (even though they were within these 24 hours). Thank you for any ideas or solutions for this case.
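A hedged alternative that sidesteps subsearch limits (by default a subsearch is capped at around 10,000 results and 60 seconds, which can silently drop matches over a 24h window) is to search both indexes in one pass and correlate with stats; a sketch, assuming src_user and recipient carry the same values in both indexes:

(index=email1 OR index=email2) sourcetype=my_sourcetype
| stats values(index) as indexes count by src_user, recipient
| where mvcount(indexes) > 1

Rows that survive the final where are the src_user/recipient pairs that appear in both email1 and email2; the field-name mismatch in the post (source_user vs. src_user) would also need to be reconciled for either approach to match.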