Check out the bewitching Community Office Hours, Tech Talks, and Webinars we’ve conjured up for October below—no tricks, just treats!

What are Community Office Hours?
Community Office Hours is an interactive 60-minute Zoom series where participants can ask questions and engage with technical Splunk experts on various topics. Whether you're just starting your journey with Splunk or looking for best practices to take your deployment to the next level, Community Office Hours provides a safe and open environment for you to get help. If you have an issue you can’t seem to resolve, have a question you’re eager to get answered by Splunk experts, are exploring new use cases, or just want to sit and listen in, Community Office Hours is for you!

What are Tech Talks?
Tech Talks are designed to accelerate adoption and ensure your success. In these engaging 60-minute sessions, we dive deep into best practices, share valuable insights, and explore additional use cases to expand your knowledge and proficiency with our products. Whether you're looking to optimize your workflows, discover new functionalities, or troubleshoot challenges, Tech Talks is your go-to resource.

SECURITY

Tech Talks | What's New with SOAR 6.3
October 2, 2024 at 11am PT
Join this Tech Talk to see what Splunk SOAR is delivering in version 6.3. During this session, the team will provide a deep dive into new features like end-user prompts, FedRAMP certification, and integrations with Splunk Enterprise Security to help empower your SOC.

Office Hours | Risk-Based Alerting
October 2, 2024 at 1pm PT
This is your opportunity to ask questions related to your specific Splunk Risk-Based Alerting challenge or use case, including:
- Quick guidance to set up the foundations and get started with RBA
- Essential steps of implementing RBA
- Best practices for proper creation of risk rules, modifiers, etc.
- Troubleshooting and optimizing your environment for successful implementation
- Anything else you’d like to learn!

Office Hours | SOAR
October 9, 2024 at 1pm PT
This is your opportunity to ask questions related to your specific Splunk SOAR needs and use cases, including:
- New features from our recent 6.3 release
- SOAR 6.3 and Enterprise Security 8.0 integrations and the unified TDIR workflow
- Using SOAR and Attack Analyzer together
- Best practices for developing playbooks, workbooks, and process workflows
- SOAR App recommendations
- Automating incident response, threat hunting, penetration testing, etc.
- Success measurement
- Anything else you'd like to learn!

Generative AI for SPL -- Faster Results
Tuesday, October 29, 2024 at 11am PT
See how, with AI Assistant, more users in your organization can get the full value of Splunk insights and lessen the burden on your administrators.

OBSERVABILITY

Tech Talks | Experience the Impact of Synthetic Monitoring at Splunk
October 1, 2024 at 11am PT
Join Splunk’s Growth Engineering team in their third Tech Talk as they discuss their adoption of Splunk Synthetic Monitoring to gain visibility across website pages, track core web vitals, and evaluate API performance across both traditional monolithic and multi-cloud environments. They will also showcase how comprehensive visibility was essential to minimizing downtime costs, improving customer experience, and furthering our digital resilience.

Office Hours | Digital Experience Monitoring (RUM + Synthetics)
October 23, 2024 at 1pm PT
This is your opportunity to ask questions related to your specific Digital Experience Monitoring (DEM) use cases with Splunk RUM and Splunk Synthetics, including:
- Gaining a full view of the end-user experience, through features like Session Replay
- Running front-end/back-end investigations to pinpoint errors
- Running synthetic tests to proactively predict app and website performance
- Anything else you’d like to learn!
We use dynamic tags, like ticket numbers or alert IDs, on all of our containers. We have a retention policy that deletes containers after a year of not being updated. I would like something similar that removes all the unused tags: if a tag with an event ID is no longer in use anywhere, it gets deleted. We currently have thousands of tags, and it is starting to bog down the UI.
I'm trying to resolve an issue where Splunk sends email reports, but the information exported as an attachment uses a raw numeric ("chron number") format for dates instead of a more readable format like "September 30, 2024". Where can I implement a fix for this, and how can I do it?
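If the attached column is an epoch (Unix) timestamp, the usual place to fix this is in the search that feeds the report, before the email action runs. A sketch (the field name `report_date` is hypothetical):

```spl
... | eval report_date=strftime(report_date, "%B %d, %Y")
```

`strftime` formats an epoch value into a human-readable string, so the attachment carries the formatted column instead of the raw number.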
I have sample data like below. I need to display single-value counts of Completed and Pending in two different single-value panels, with their percentage in brackets (screenshot is attached). Total=10, Completed=6, Pending=4. The first panel should show the Completed count as 6 (60%) and the second panel the Pending count as 4 (40%), as shown in the photo. Please provide me the query.

ServerName    UpgradeStatus
Server1       Completed
Server2       Completed
Server3       Completed
Server4       Completed
Server5       Completed
Server6       Completed
Server7       Pending
Server8       Pending
Server9       Pending
Server10      Pending
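One common pattern for this (a sketch, assuming the field names from the sample data above) is to count per status, compute the grand total with `eventstats`, and build the combined display string:

```spl
| stats count by UpgradeStatus
| eventstats sum(count) as total
| eval display=count." (".round((count/total)*100)."%)"
| table UpgradeStatus display
```

Each single-value panel can then filter the result, e.g. `| search UpgradeStatus="Completed" | fields display` in one panel and the same with `Pending` in the other.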
Hi all, since v9.3 there seems to be a different method for displaying nav menus. When you update the <label> tag of a view from an external editor, those changes are not reflected in the navigation until a local storage object is deleted. /debug/refresh or restarting Splunk doesn't refresh the navigation. I was able to update the navigation by deleting the following object: Chrome -> Developer Tools -> Application -> "Local Storage" -> splunk-appnav:MYAPP:admin:en-GB:UUID, containing the following data structure:

{
  "nav": [
    {
      "label": "Search",
      "uri": "/en-GB/app/testapp/search",
      "viewName": "search",
      "isDefault": true
    },
    {
      "label": "testview",
      "uri": "/en-GB/app/testapp/testview",
      "viewName": "testview"
    }
  ],
  "color": null,
  "searchView": "search",
  "lastModified": 1727698963355
}

I'm wondering why the content of the nav is now cached on the client side. This is different behaviour from v9.1 and v9.2. If I had to guess, they tried to improve the response time of the web UI. But how do I ensure that every user receives the latest version of the navigation menu in an app? Best regards, Andreas
The Cybersecurity Paradox: How Cloud Migration Addresses Security Concerns
Thursday, October 17, 2024 | 10AM PT / 1PM ET
Are you ready to transform your security operations and unlock the full potential of the cloud? In this session, TekStream, an Elite Splunk Partner, will dive into how cloud migration can not only enhance your security but also deliver measurable ROI through the AI-powered Splunk Cloud Platform (SaaS). By the end of the session, you'll be equipped with tools and best practices to strategize, optimize, and build the internal team to manage your security programs with long-term efficiencies and scale. Don't miss this opportunity to educate, empower, and set your organization up for lasting success in the cloud. To learn more, register today!
We have a report that generates data with the `outputlookup` command, and we need to schedule it multiple times but with different time ranges. We want to run it each day, covering different time ranges in sequential order. Each run requires the previous run to finish so it can load the lookup results for the next run. We can't just schedule a single report that updates the lookup, because we need it to run on a different time range each time it triggers. Is there any way we can schedule a report to run in this particular way? We thought about cloning it multiple times and scheduling each clone differently, but that is not an ideal solution. Regards.
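One pattern that avoids cloning (a sketch only; the lookup name `range_cursor.csv`, its fields, and the index are all hypothetical) is a single scheduled search that reads its last processed window from a small cursor lookup, searches that window, and appends the results:

```spl
| inputlookup range_cursor.csv
| eval earliest=last_end, latest=last_end+86400
| map maxsearches=1 search="search index=my_index earliest=$earliest$ latest=$latest$
    | outputlookup append=true my_results.csv"
```

A second step (or an appended `outputlookup` writing `latest` back as the new `last_end`) advances the cursor, so each scheduled run naturally picks up where the previous one finished.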
Hello, I'm having trouble connecting a Splunk instance to an Azure Storage Account. After the account was set up, I configured my Splunk instance to connect to the Storage Account using the Splunk Add-on for Microsoft Cloud Services. When I enter the Account Name and the Account Secret, it gives this error: This was configured from "Configuration" > "Azure Storage Account" > "Add". I have checked the Account Name and the Access Key; they are correct. Looking at the logs, this was the only noticeable error that pops up:

log_level=ERROR pid=3270316 tid=MainThread file=storageaccount.py:validate:97 | Error <urllib3.connection.HTTPSConnection object at 0x7e14a4a8e940>: Failed to establish a new connection: [Errno -2] Name or service not known while verifying the credentials: Traceback (most recent call last):

Other than this, I saw some HTTP requests with 502 errors in splunkd.log, but I don't know if they are related. I have checked whether the Splunk machine can reach the Azure resource, and it can. It can also make API calls correctly. At this point I have no idea what could cause this problem. Do you have any idea what checks I could do to find where the problem is? Did I miss some configuration? Could it be a problem on the Azure side? If yes, what checks should I do? (I used the official guide https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configurestorageaccount/) Thanks a lot in advance for your help.
Hi, I am looking to monitor the dispatch directory over time. I know I can get the current count by using this:

| rest /services/search/jobs | stats count

But I would like to run the check every minute and get a per-minute breakdown of how the dispatch count changes over time. Rob
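One way to get a per-minute trend (a sketch; the summary index name `dispatch_summary` is an assumption) is to schedule the REST count every minute and write it to a summary index:

```spl
| rest /services/search/jobs
| stats count as dispatch_jobs
| collect index=dispatch_summary
```

The trend can then be charted from the summary index:

```spl
index=dispatch_summary dispatch_jobs=*
| timechart span=1m max(dispatch_jobs) as dispatch_jobs
```

Note that `| rest /services/search/jobs` counts job artifacts visible to the REST layer, which tracks, but is not byte-identical to, the contents of the dispatch directory on disk.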
In my Splunk dashboard I have a table with timestamps in UTC. But my team works across multiple time zones, so they want me to make PST the default time zone in the dashboard. How can we achieve this in a Splunk dashboard?
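Splunk Web normally renders times in each user's profile time zone, so forcing a single zone in a table usually means formatting the time yourself. A sketch with a fixed UTC-8 offset (note this ignores the PST/PDT daylight-saving switch, and assumes the events carry epoch times in `_time`):

```spl
| eval time_pst=strftime(_time - 8*3600, "%Y-%m-%d %H:%M:%S PST")
| table time_pst host source
```

An alternative that avoids hand-rolled offsets is asking the affected users to set their own user time zone to US/Pacific under their account preferences.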
I did a recent upgrade of Splunk, but now I notice my deployment clients are not phoning home for some reason. This is my first upgrade in a production environment, so any help troubleshooting would be great. I still see my client configs on the backend, but I'm not sure why they are not reporting in the GUI.
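A first check (a sketch, run on the deployment server; field names are as exposed by this endpoint on recent versions and may differ on yours) is what the deployment server's REST clients endpoint reports:

```spl
| rest /services/deployment/server/clients
| table hostname ip lastPhoneHomeTime
```

If that list is empty while the client configs still exist on disk, the next place to look is `index=_internal` on both sides for phone-home or TLS errors around the upgrade time.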
I'm using Splunk Enterprise 9.2.1 on Windows. On my search head I have a bunch of apps (40+) laid out as follows:

/etc/apps/myapp1
/etc/apps/myapp2
/etc/apps/myapp3

Etc. Each app, of course, has its own dashboards defined. Now I'd like to group all these dashboards under one app and create a menu system for them. I control each app under Git and can deploy them using a DevOps cycle. What I would like to do is create this new app but simply reference the dashboards that reside in the other apps, so that I keep my source/version control. Is this possible, or would I simply have to copy all the dashboards/views into this new app?
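One option worth exploring (a sketch; app and view names are placeholders, and the referenced views must be shared at app level so other apps can see them) is to give the new app only a navigation file that links to the views living in the original apps, e.g. in the new app's default/data/ui/nav/default.xml:

```xml
<nav search_view="search">
  <collection label="My App 1">
    <a href="/app/myapp1/dashboard1">Dashboard 1</a>
    <a href="/app/myapp1/dashboard2">Dashboard 2</a>
  </collection>
  <collection label="My App 2">
    <a href="/app/myapp2/overview">Overview</a>
  </collection>
</nav>
```

This keeps each dashboard's source in its own Git-controlled app; the new app holds nothing but the menu.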
I am working on obtaining all user logins for a specified domain, then displaying what percent of those logins were from compliant devices. I start by creating a couple of fields for ease of reading. These fields do produce data as expected; however, the table comes out with null for the percent values. I have tried the variations below, unfortunately with similar results: when trying to create a total value by combining compliant and noncompliant to divide, the total field does not have data either.

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| stats count as total by userPrincipalName
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent

base search
| eval DeviceCompliance='deviceDetail.isCompliant'
| eval compliant=if(DeviceCompliance="true",DeviceCompliance,null())
| eval noncompliant=if(DeviceCompliance="false",DeviceCompliance,null())
| eval total=sum(compliant+noncompliant)
| stats count by userPrincipalName
| table userPrincipalName compliant total
| eval percent=((compliant/total)*100)
| table userPrincipalName total percent
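A common pattern for this kind of ratio (a sketch, assuming `deviceDetail.isCompliant` is the string "true"/"false") is to turn compliance into a 1/0 flag and aggregate both the flag and the total inside a single `stats`, so the per-user fields survive past the aggregation:

```spl
base search
| eval compliant=if('deviceDetail.isCompliant'="true", 1, 0)
| stats sum(compliant) as compliant, count as total by userPrincipalName
| eval percent=round((compliant/total)*100, 2)
| table userPrincipalName total percent
```

The key point is that any field created by `eval` before `stats` is dropped unless it is aggregated in the `stats` itself, which is why `percent` came out null in the original attempts.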
I have to create a base search for a dashboard and I am kinda stuck. Any help would be appreciated.

index=service msg.message="*uri=/v1/payment-options*" eHttpMethodType="GET"
| fields index, msg.springProfile, msg.transactionId, eHttpStatusCode, eHttpMethodType, eClientId, eURI
| dedup msg.transactionId
| rename msg.springProfile as springProfile
| eval profile = case(like(springProfile, "%dev%"), "DEV", like(springProfile, "%qa%"), "QA", like(springProfile, "%uat%"), "UAT")
| eval request = case(like(eURI, "%/v1/payment-options%"), "PaymentOptions", like(eURI, "%/v1/account%"), "AccountTransalation")
| stats count as "TotalRequests",
    count(eval(eHttpStatusCode=201 or eHttpStatusCode=204 or eHttpStatusCode=200)) as "TotalSuccessfulRequests",
    count(eval(eHttpStatusCode=400)) as "Total400Faliures",
    count(eval(eHttpStatusCode=422)) as "Total422Faliures",
    count(eval(eHttpStatusCode=404)) as "Total404Faliures",
    count(eval(eHttpStatusCode=500)) as "Total500Faliures"
    by profile, eClientId

I want to include the stats in the base search, otherwise my values/events would be truncated. My problem is that I also need to count

| stats count as "TotalRequests", count(eval(eHttpStatusCode=201 or eHttpStatusCode=204 or eHttpStatusCode=200)) as "TotalSuccessfulRequests" by request

for each of the profiles (DEV, QA, UAT) to display in 3 different panels. How do I incorporate this into the above base search?
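One approach (a sketch built on the fields above) is to add `request` to the base search's split-by clause so it carries both breakdowns, then let each panel re-aggregate the slice it needs:

```spl
... | stats count as TotalRequests,
        count(eval(eHttpStatusCode=200 OR eHttpStatusCode=201 OR eHttpStatusCode=204)) as TotalSuccessfulRequests
        by profile, request, eClientId
```

A per-environment panel then becomes a cheap post-process of the base search, e.g. for the DEV panel:

```spl
| search profile="DEV"
| stats sum(TotalRequests) as TotalRequests, sum(TotalSuccessfulRequests) as TotalSuccessfulRequests by request
```

Since counts split by extra fields sum back up cleanly, the finer-grained base search serves both the per-client table and the per-request panels.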
I am getting the error "could not create search". How do I fix this error? XML:

<input type="multiselect" token="environment">
  <label>Environments</label>
  <choice value="cfp08">p08</choice>
  <choice value="cfp07">p07</choice>
  <choice value="*">ALL</choice>
  <default>*</default>
  <valuePrefix>environment =</valuePrefix>
  <delimiter> OR </delimiter>
  <search>
    <query/>
  </search>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
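A likely culprit is the empty `<query/>`: declaring `<fieldForLabel>`/`<fieldForValue>` tells the input to populate choices dynamically, so the `<search>` must contain an actual query (or be removed entirely to use only the static choices). A sketch with a placeholder index:

```xml
<input type="multiselect" token="environment">
  <label>Environments</label>
  <choice value="*">ALL</choice>
  <default>*</default>
  <valuePrefix>environment=</valuePrefix>
  <delimiter> OR </delimiter>
  <search>
    <query>index=my_index | stats count by environment | fields environment</query>
  </search>
  <fieldForLabel>environment</fieldForLabel>
  <fieldForValue>environment</fieldForValue>
</input>
```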
Hello Splunkers!

1. Objective: free up disk space by deleting 1 month of data from a specific Splunk index containing 1 year of data.
2. Key considerations:
- How can we verify that the deletion of 1 month of data from the Splunk index was successful?
- How long does Splunk typically take to delete this amount of data from the indexes?
- Is there a way to monitor or observe the deletion of old buckets or data using the Splunk UI (via SPL queries)?

Thanks in advance!!
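For the verification question, `dbinspect` reports each bucket's time span and size, so the oldest remaining event time and total footprint can be watched before and after the change (a sketch; the index name is a placeholder):

```spl
| dbinspect index=my_index
| stats min(startEpoch) as oldest_event, max(endEpoch) as newest_event, sum(sizeOnDiskMB) as total_mb
| eval oldest_event=strftime(oldest_event, "%Y-%m-%d"), newest_event=strftime(newest_event, "%Y-%m-%d")
```

When `oldest_event` moves forward by a month and `total_mb` drops, the frozen/deleted buckets are gone. Note that retention operates on whole buckets, so a bucket is only removed once its newest event crosses the retention boundary.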
Hi all, we have created a table viz containing two levels of dropdowns which use the same index and sourcetype. While implementing the row-expansion JavaScript in the dashboard, we get results at both levels; however, the second-level expansion exits abruptly. We also noticed that pagination only works in the first-level (inner child) row-expansion table, for the first row we select, and only once. If we select a second row/entry in the same parent table, the inner child table's pagination freezes. We need to reload the dashboard every time to fix this.
Hello, I am looking to configure a POST request using a webhook as an alert action, but I can't see any authentication option. How do I add authentication to the webhook?
Hey guys, I have an input that is monitoring a log from syslog. The file contains data of multiple severities, which is bad, but I was thinking I could use a transform in props to set the sourcetype, which I could then use to format the data. So I did this in inputs.conf:

[udp://x.x.x.x:5514]
index=cisco_asa
sourcetype=cisco_firewall
disabled=false

And these are logs from the Cisco ASA:

Sep 20 15:36:41 10.10.108.122 %ASA-4-106023: Deny tcp src inside:x.x.x.x/xxxx dst outside:x.x.x.x/xxxx by access-group "Inside_access_in" [0x51fd3ce2, 0x0]
Sep 20 15:36:37 10.10.108.122 %ASA-5-746015: user-identity: [FQDN] go.microsoft.com resolved x.x.x.x
Sep 20 15:36:37 10.10.108.122 %ASA-6-302021: Teardown ICMP connection for faddr x.x.x.x/x gaddr x.x.x.x/x laddr x.x.x.x/x type 8 code 0

Then I created a transforms.conf:

[set_log_type_critical]
source_key = _raw
regex = .*%ASA-4
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:alert

[set_log_type_error]
source_key = _raw
regex = .*%ASA-5
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:critical

[set_log_type_warnig]
source_key = _raw
regex = .*%ASA-6
dest_key=MetaData:Sourcetype
format=sourcetype::cisco:firewall:error

I also have a props.conf that looks like:

[cisco:firewall]
TRANSFORMS-setlogtype_alert=set_log_tyoe_critical
TRANSFORMS-setlogtype_critical=set_log_tyoe_error
TRANSFORMS-setlogtype_error=set_log_tyoe_warning

My question is this: after I configured all that, the sourcetype separation is still not happening. Do the transforms and props look correct? I'm testing locally, so I can break things all day long. Thanks for the assistance.
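For comparison, a consistent version of the pair might look like this (a sketch only; the key points are that the props.conf stanza must match the sourcetype actually assigned in inputs.conf, the transform names referenced by `TRANSFORMS-` must match the transforms.conf stanza names character for character, and the setting keys are conventionally uppercase):

```ini
# props.conf -- stanza matches the inputs.conf sourcetype "cisco_firewall"
[cisco_firewall]
TRANSFORMS-setlogtype = set_log_type_alert, set_log_type_critical, set_log_type_error

# transforms.conf -- one stanza per severity, names matching props.conf exactly
[set_log_type_alert]
SOURCE_KEY = _raw
REGEX = %ASA-4-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:alert

[set_log_type_critical]
SOURCE_KEY = _raw
REGEX = %ASA-5-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:critical

[set_log_type_error]
SOURCE_KEY = _raw
REGEX = %ASA-6-
DEST_KEY = MetaData:Sourcetype
FORMAT = sourcetype::cisco:firewall:error
```

This must live on the first full (parsing) Splunk instance the data passes through, since sourcetype rewriting happens at index/parse time, not at search time.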
Is there a LinkedIn profile to follow for all Splunk updates and related cyber threats?