I also expected the LOG field to be extracted. Were the changes to props/transforms installed on the first full Splunk instance that sees the data? Was that instance restarted? Is the screenshot showing new data (since the restart)?
In SOAR, the webserver is nginx. It has a configuration file at ($SOARDIR$ = your SOAR/Phantom install directory, e.g. /opt/phantom or /data):

$SOARDIR$/usr/nginx/conf/phantom-nginx-server.conf

...which includes the config in conf.d:

$SOARDIR$/usr/nginx/conf/conf.d/phantom-nginx-server.conf

which sets the SSL options:

ssl_certificate /opt/phantom/etc/ssl/certs/httpd_cert.crt;
ssl_certificate_key /opt/phantom/etc/ssl/private/httpd_cert.key;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_ciphers
ssl_session_cache shared:TLS:2m;
ssl_dhparam /opt/phantom/etc/ssl/dhparams.pem;
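For example, tightening the allowed protocols would be a one-line edit to that conf.d file (hypothetical edit shown; nginx needs a reload afterwards, however your deployment manages its services):

ssl_protocols TLSv1.2;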
Unfortunately, those searches are of different types (one starts with a streaming search command, the other with a report-generating tstats command), which means you can't combine them into one search and process combined results, or use multisearch to run both in parallel. You're limited to either using the join command as you attempted, or appending one result set to the other and then doing some summarizing stats. Having said that, I don't quite get how you imagine your desired output, since tstats will split results by three fields whereas your raw index search returns stats split only by host.
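If it helps, here is a minimal sketch of that append-and-summarize approach (untested, subject to append's subsearch limits; field names are taken from your two searches):

| tstats count where index=* by host, index, sourcetype
| rename count as events_latest_hour
| append
    [ search index=_internal host=splunk_shc source=*license_usage.log* type=Usage
    | stats sum(b) as usage_lastest_hour by h
    | eval usage_lastest_hour=round(usage_lastest_hour/1024/1024/1024,2)
    | rename h as host
    | fields host usage_lastest_hour ]
| eventstats values(usage_lastest_hour) as usage_lastest_hour by host
| where isnotnull(events_latest_hour)
| table host index sourcetype events_latest_hour usage_lastest_hour

The eventstats copies each host's usage onto its per-index/per-sourcetype rows, and the where drops the bare appended rows afterwards.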
Currently, I have a field called pluginText which is the following (italicized words are anonymized to what they represent):

<plugin_output>
The following software are installed on the remote host:
Vendor Software [version versionnumber] [installed on date]
...
...
...
</plugin_output>

I wish to extract Vendor, Software, and versionnumber into separate fields and need a rex to do so. I am unfamiliar with using rex on this type of list, so I was hoping someone could point me in the right direction.
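To make the goal concrete, something along these lines is what I imagine (illustrative only; it assumes Vendor and Software are each a single space-separated token, which may not hold):

| rex field=pluginText max_match=0 "(?<Vendor>\S+)\s+(?<Software>\S+)\s+\[version (?<versionnumber>[^\]]+)\]"

max_match=0 would make the extracted fields multivalued, one entry per line of the list.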
Not only is it possible, it's mandatory.  You don't have to worry about it, though, because Splunk manages it for you.
Not able to get that to work
Even using a field that has defined IP values doesn't work and returned the following error:

"Streamed search execute failed because: Error in 'ipdetection' command: External search command exited unexpectedly with non-zero error code 1."

This works, but you can't pass values to it within a query:

| ipqualityscore field="IP Address" value="8.8.8.8"
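The closest I can get is driving it row by row with the map command, which substitutes each result's field into the search string (sketch only, with a placeholder ip field; not sure this is the intended usage):

<your search returning an ip field>
| map maxsearches=10 search="| ipqualityscore field=\"IP Address\" value=\"$ip$\""

Note that map re-runs the inner search once per result, so it can be slow and is capped by maxsearches.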
Are you able to use the join command based on host?

<search 1> | join host [<search 2>]
I am trying to join two searches together to table the combined results by host. The first search below shows the number of events in the last hour by host, index, and sourcetype:

| tstats count where index=* by host, index, sourcetype
| addtotals
| sort -Total
| fields - Total
| rename count as events_latest_hour

The second search shows the ingest per hour in GB by host:

(index=_internal host=splunk_shc source=*license_usage.log* type=Usage)
| stats sum(b) as Usage by h
| eval Usage=round(Usage/1024/1024/1024,2)
| rename h as host, Usage as usage_lastest_hour
| addtotals
| sort -Total
| fields - Total

Can you please help with how I would join these two searches together to display host, index, sourcetype, events_latest_hour, and usage_lastest_hour? Basically I want to table the results of the first search and also include the "usage_lastest_hour" value from the second search in the table.
Hi,

regex _raw is the wrong command here (regex - Splunk Documentation), but rex seems wrong too (rex - Splunk Documentation) because it can't do a key-value extraction in search. I found an odd way to handle this (extract only works on _raw, so the renames temporarily swap the target field into _raw and back):

| spath
| rename _raw AS temp date AS _raw
| extract pairdelim="|" kvdelim="="
| rename _raw as date temp as _raw

reference: extract - Splunk Documentation

Is this what you are searching for?

Kind Regards
Register here. This thread is for the Community Office Hours session with the Splunk Threat Research Team on Detecting Remote Code Executions on Wed, Jun 5, 2024 at 1pm PT / 4pm ET.

This is your opportunity to ask questions about using the latest security content developed by the Splunk Threat Research Team to detect RCEs, including:

How to find and access security content designed to help defend against RCEs
Best practices and practical tips for using this content
Specific questions about recently released content for detecting RCEs impacting Jenkins servers, Ivanti VPN devices, and Confluence Data Center and Server
Anything else you'd like to learn!

Please submit your questions at registration or as comments below. You can also head to the #office-hours user Slack channel to ask questions (request access here).

Pre-submitted questions will be prioritized. After that, we will open the floor up to live Q&A with meeting participants.

Look forward to connecting!
Hi All,

I have set up the Object and Event input configuration in the Salesforce TA. I am able to see the object logs but unable to see the event logs in Splunk Cloud.

Any directions for triaging the issue? Appropriate permissions are provided for the Salesforce user.
I haven't found a definitive answer in any of the docs yet. Is it possible to utilize Splunk SmartStore when everything is in Splunk Cloud and we do not have an on-prem Enterprise?
The subsearch derived the Member field from TeamMember, so it would seem the main search, which uses the same index and sourcetype, would expect a field called "TeamMember" to come from the subsearch. For a join to work properly, both sides must use the same field name(s). This can be done using rename in the subsearch.

Run the subsearch by itself with | format appended to see what the subsearch turns into. That resulting string, inserted into the main search, is what produces the final result set. Adjust the subsearch (or the join command itself) appropriately to get the results you want.
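As an illustrative sketch only (the Member/TeamMember relationship is from the thread; the index, sourcetype, and everything else are placeholders):

index=your_index sourcetype=your_sourcetype
| join TeamMember
    [ search index=your_index sourcetype=your_sourcetype
    | rename Member as TeamMember
    | fields TeamMember ]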
Done
Take a look below to explore our upcoming Community Office Hours, Tech Talks, and Webinars in April. This post will be updated monthly so be sure to bookmark it and check back for new events!

What are Community Office Hours?

Community Office Hours is an interactive 60-minute Zoom series where participants can ask questions and engage with technical Splunk experts on a variety of topics. Whether you're just starting your journey with Splunk or looking for best practices to take your deployment to the next level, Community Office Hours provides a safe and open environment for you to get help. If you have an issue you can’t seem to resolve, have a question you’re eager to get answered by Splunk experts, are exploring new use cases, or just want to sit and listen in, Community Office Hours is for you!

What are Tech Talks?

Tech Talks are designed to accelerate adoption and ensure your success. In these engaging 60-minute sessions, we dive deep into best practices, share valuable insights, and explore additional use cases to expand your knowledge and proficiency with our products. Whether you're looking to optimize your workflows, discover new functionalities, or troubleshoot challenges, Tech Talks is your go-to resource.

SECURITY

Office Hours | Security: Automated Threat Analysis with Splunk Attack Analyzer
April 17, 2024 at 1pm PT

This Community Office Hours session with Neal Iyer, Sr. Principal Product Manager, will be focused on automated threat analysis with Splunk Attack Analyzer. Join us for an office hours session to ask questions about how automated threat analysis can enhance your existing security workflows, including:

Practical applications and common use cases
How Splunk Attack Analyzer integrates with other Splunk security solutions
Anything else you'd like to learn!

Tech Talk | How to Uplevel Your Threat Hunting with the PEAK Framework and Splunk
April 24, 2024 at 11am PT

This tech talk shares how the Splunk Threat Hunting team seamlessly integrated the PEAK Threat Hunting Framework into their workflow while leveraging Splunk. Explore Splunk’s end-to-end processes with tips and tricks to unleash a pipeline of hunters and turn the PEAK Threat Hunting framework from a concept into a powerful tool in your organization. Join Splunk threat hunters Sydney Marrone and Robin Burkett to learn about:

The PEAK threat-hunting framework
How you can customize PEAK for your environment
How to enable your SOC analysts to be successful threat hunters
Real-world examples of PEAK hunt types for actionable insights

OBSERVABILITY

Office Hours | Observability: APM: Session 2
April 10, 2024 at 1pm PT

This is your opportunity to ask questions about your current Observability APM challenge or use case, including:

Sending traces to APM
Tracking service performance with dashboards
Setting up deployment environments
AutoDetect detectors
Enabling Database Query Performance
Setting up business workflows
Implementing high-value features (Tag Spotlight, Trace View, Service Map)
Anything else you'd like to learn!

Tech Talk | Extending Observability Content to Splunk Cloud
April 23, 2024 at 11am PT

Learn how to leverage Splunk Observability data in your Splunk Cloud Platform!

Improve your Splunk platform’s capabilities with Splunk Observability Cloud
Accelerate root cause analysis with Related Content in Splunk Cloud Platform
Enhance troubleshooting with Splunk Infrastructure Monitoring and Splunk Application Performance Monitoring

Office Hours | Observability: Usage and Data Control
April 24, 2024 at 1pm PT

This is your opportunity to ask questions about your current Observability Usage and Data Control challenge or use case, including:

Metrics Pipeline Management in Splunk Infrastructure Monitoring (IM)
Metric Cardinality
Aggregation Rules
Impact and benefits of data dropping
Anything else you'd like to learn!
Splunk Lantern is Splunk’s customer success center that provides advice from Splunk experts on valuable data insights, key use cases, and tips on managing Splunk more efficiently. We also host Getting Started Guides for a range of Splunk products, a library of Product Tips, and Data Descriptor articles that help you see everything that’s possible with data sources and data types in Splunk.

This month we’re highlighting a brand new set of content on Lantern. Splunk Outcome Paths show you how to achieve common goals that many Splunk customers are looking for in order to run an efficient, performant Splunk implementation. As usual, we’re also sharing the full list of articles published over the past month. Read on to find out more.

Splunk Outcome Paths

In today’s dynamic business landscape, navigating toward desired outcomes requires a strategic approach. If you’re a newer Splunk customer or looking to expand your Splunk implementation, it might not always be clear how to do this while reducing costs, mitigating risks, improving performance, or increasing efficiencies. Splunk Outcome Paths have been designed to show you all the right ways to do all of these things.

Each of these paths has been created and reviewed by Splunk experts who’ve seen the best ways to address specific business and technical challenges that can impact the smooth running of any Splunk implementation. Whatever your business size or type, Splunk Outcome Paths offer a range of strategies tailored to suit your individual needs:

If you’re seeking to reduce costs, you can explore strategies such as reducing infrastructure footprint, minimizing search load, and optimizing storage.
Mitigating risk involves implementing robust compliance measures, establishing disaster recovery protocols, and safeguarding against revenue impacts.
Improving performance means planning for scalability, enhancing data management, and optimizing systems.
Increasing efficiencies focuses on deploying automation strategies, bolstering data management practices, and assessing readiness for cloud migration.

Choosing a path with strategies tailored to your priorities can help you get more value from Splunk, and grow in clarity and confidence as you learn how to manage your implementation in a tried-and-true manner. We’re keen to hear more about what you think of Splunk Outcome Paths and whether there are any topics you’d like to see included in future. You can add a comment below to send your ideas to our team.

Use Case Explorer Updates

Splunk Lantern’s Use Case Explorer for Security and the Use Case Explorer for Observability have become popular tools with Splunk customers looking for a framework for their Security or Observability journey. But technology changes fast, and today’s organizations are under more pressure than ever from cyber threats, outages, and other challenges that leave little room for error. That’s why on team Lantern we’ve been working hard to realign our Use Case Explorers with Splunk’s latest thinking around how to achieve digital resilience.

Our Use Case Explorers follow a prescriptive path for organizations to improve digital resilience across security and observability. Each of the Explorers starts with use cases to help you achieve foundational visibility so you can access the information your teams need. With better visibility you can then integrate guided insights that help you respond to what's most important. From there, teams can be more proactive and automate processes, and ultimately focus on unifying workflows that provide sophisticated and fast resolutions for teams and customers.

If you haven’t yet checked out our Use Case Explorer for Security or the Use Case Explorer for Observability, take a look today, and drop us a comment below if there’s anything you’d like to see in a future update!

This Month’s New Articles

Here’s the rest of everything that’s new on Lantern, published over the month of March:

Enhancing endpoint monitoring with threat intelligence
De-identifying PII consistently with hashing in Edge Processor
Using lessons learned from incidents to harden your SOC processes
Using ingest actions to filter AWS CloudTrail Logs
Using ingest actions to filter AWS VPC Flow Logs
Applying Benford's law of distribution to spot fraud
Proactive Response: Orchestrate response workflows
Tracking assets when recovering from an incident
Proactive Response: Automate threat analysis
Proactive Response: Automate containment and response actions
Optimized Workflows: Automate complete TDIR life cycle
Optimized Workflows: Federate access and analytics
Configuring Windows event logs for Enterprise Security use
Unified Workflows: Align IT and Business with Service Monitoring
Guided Insights: Understand the Impact of Changes
Unified Workflows: Enable Self-Service Observability
Foundational Visibility: Optimize Cloud Monitoring
Proactive Response: Prevent Outages
Proactive Response: Debug Problems in Microservices
Proactive Response: Optimize End-User Experiences
Enabling Windows event log process command line logging via group policy object

We hope you’ve found this update helpful. Thanks for reading!

Kaye Chapman, Senior Lantern Content Specialist for Splunk Lantern
If you edit your earlier answer to correct the syntax, I'll be able to mark it as the solution...
Hi @sajo.sam,

I did some digging and found this info. We can see 401 when there is an issue with either the access key or the account name:

kubectl -n appdynamics create secret generic cluster-agent-secret --from-literal=controller-key="myaccount access key valid"

Can you please check and confirm that the access key you used to create the secret is the same as the access key under Settings > Licenses > Account. If it's not the same, please use the correct key, repeat the steps to create the secret, and recreate the yaml.
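If useful, you can also inspect what the secret currently holds with standard kubectl (the secret and key names below match the create command above):

kubectl -n appdynamics get secret cluster-agent-secret -o jsonpath='{.data.controller-key}' | base64 --decode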
Hi @Ryan.Paredez

I tried but I'm stuck with another issue. The logs given below show it faces some errors with "Failed to send agent registration request: Post "accountname.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"

[ERROR]: 2024-04-09 11:20:38 - secretconfig.go:68 - Problem With Getting /opt/appdynamics/cluster-agent/secret-volume/api-user Secret: open /opt/appdynamics/cluster-agent/secret-volume/api-user: no such file or directory
[INFO]: 2024-04-09 11:20:38 - main.go:78 - Kubernetes version: v1.29.0
[INFO]: 2024-04-09 11:20:38 - main.go:236 - Registering cluster agent with controller host : accountname.saas.appdynamics.com controller port : 8080 account name : accountname
[WARNING]: 2024-04-09 11:20:38 - agentregistrationmodule.go:352 - "default" is not a valid namespace in your kubernetes cluster
[INFO]: 2024-04-09 11:20:38 - agentregistrationmodule.go:356 - Established connection to Kubernetes API
[INFO]: 2024-04-09 11:20:38 - agentregistrationmodule.go:68 - Cluster name: fromKube
[INFO]: 2024-04-09 11:20:38 - agentregistrationmodule.go:119 - Initial Agent registration
[ERROR]: 2024-04-09 11:21:08 - agentregistrationmodule.go:131 - Failed to send agent registration request: Post "accountname.saas.appdynamics.com:8080/sim/v2/agent/clusterRegistration": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
[ERROR]: 2024-04-09 11:21:08 - agentregistrationmodule.go:132 - clusterId: -1
[ERROR]: 2024-04-09 11:21:08 - agentregistrationmodule.go:134 - Registration properties: {}
[INFO]: 2024-04-09 11:21:38 - agentregistrationmodule.go:119 - Initial Agent registration

^ Post edited by @Ryan.Paredez to remove mentions and links to Account name. For security and privacy reasons, please redact the name of your Account in Community posts.