All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Sometimes I set myself SPL conundrum challenges just to see how to solve them. I realised I couldn't do something I thought would be quite straightforward. For the dummy data below I want a single row resultset which tells me how many events of each UpgradeStatus and how many events in total, i.e.

Total  Completed  Pending  Processing
11     6          3        2

I don't know in advance what the different values of UpgradeStatus might be, and I don't want to use addtotals (this is the challenge part). I came up with the solution below which kinda "misuses" xyseries (which I'm strangely proud of). I feel like I'm missing a more straightforward solution, other than addtotals. Anyone up for the challenge? Dummy data and solution (misusing xyseries) follows...

| makeresults format=csv data="ServerName,UpgradeStatus
Server1,Completed
Server2,Completed
Server3,Completed
Server4,Completed
Server5,Completed
Server6,Completed
Server7,Pending
Server8,Pending
Server9,Pending
Server10,Processing
Server11,Processing"
| stats count by UpgradeStatus
| eventstats sum(count) as Total
| xyseries Total UpgradeStatus count
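One alternative that avoids both addtotals and the xyseries trick is the dynamic-field-name idiom, where eval {UpgradeStatus}=count turns each status value into its own column — a sketch against the same dummy data:

| stats count by UpgradeStatus
| eventstats sum(count) as Total
| eval {UpgradeStatus}=count
| fields - UpgradeStatus count
| stats max(*) as *

The final stats max(*) as * collapses the per-status rows into the single-row resultset, with Total carried through from eventstats.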
An extension of this: https://community.splunk.com/t5/Splunk-Search/Looking-at-yesterdays-data-but-need-to-filter-the-data-to-only/m-p/696758#M236798

I've created a dashboard based on the above, with an input that adds the timewrap line when "yes" is selected and nothing when "no" is selected.

The issue I am having is that when "no" is selected and I pick a smaller time window, the chart still includes the previous window. For example, I selected 4 hours, but how can I show only the last 4 hours and not the previous window as well?

The query is as follows:

index=foo
    [| makeresults
     | fields - _time
     | addinfo
     | eval day=mvrange(0,2)
     | mvexpand day
     | eval earliest=relative_time(info_min_time,"-".day."d")
     | eval latest=relative_time(info_max_time,"-".day."d")
     | fields earliest latest]
| timechart span=1m sum(value) as value
| eval _time=_time
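One sketch of a fix, assuming the yes/no input sets a hypothetical token $days$ to 2 when the comparison is on and 1 when it is off, so the subsearch only generates the currently selected window when "no" is chosen:

index=foo
    [| makeresults
     | fields - _time
     | addinfo
     | eval day=mvrange(0,$days$)
     | mvexpand day
     | eval earliest=relative_time(info_min_time,"-".day."d")
     | eval latest=relative_time(info_max_time,"-".day."d")
     | fields earliest latest]
| timechart span=1m sum(value) as value

With $days$=1, mvrange(0,1) yields only day=0, so earliest/latest stay equal to the time picker's own range.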
Hello Splunkers,

How can I use a lookup in a correlation search so that the detected keyword is shown in the search results? It's a requirement that the analysts shouldn't have the capability to view the lookups themselves.

Thanks in advance.
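One possible shape for this, sketched with hypothetical names (a keywords.csv lookup with a keyword column, matched against a field called command; restrict read access to the lookup via its app and role permissions so analysts can't open it directly):

index=endpoint sourcetype=process_launch
| lookup keywords.csv keyword AS command OUTPUT keyword AS detected_keyword
| where isnotnull(detected_keyword)
| table _time, host, user, command, detected_keyword

This is an exact match; for substring or pattern matching, the lookup definition would need match_type = WILDCARD(keyword) in transforms.conf.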
Hi Splunk community! I need to filter events from the Splunk_ta_Windows application by the EventCode, Account_Name and Source_Network_Address fields. Tell me, in what form should props.conf and transforms.conf be written, and in what folder should they be located?
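A minimal sketch, assuming the goal is to drop matching events before indexing. Both files go in $SPLUNK_HOME/etc/apps/Splunk_TA_windows/local/ on the indexers (or on a heavy forwarder if one sits in the data path), followed by a restart. Note that the REGEX runs against the raw event text, so it must match how the values appear in _raw rather than the search-time field names; the stanza name and example values below are placeholders — use [WinEventLog], [XmlWinEventLog] or [source::WinEventLog:Security] depending on how your inputs are configured:

props.conf

[WinEventLog]
TRANSFORMS-drop_filtered = drop_filtered_events

transforms.conf

[drop_filtered_events]
# Send matching events to the null queue (discard before indexing)
REGEX = (?ms)EventCode=4624.*Account Name:\s+svc_account.*Source Network Address:\s+10\.1\.2\.3
DEST_KEY = queue
FORMAT = nullQueue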
After upgrading Splunk from version 8 to 9 I've started to receive the message: "The Upgrade Readiness App detected 1 app with deprecated Python: splunk-rolling-upgrade". I can't find this app on Splunkbase | Apps. As far as I understand it's a Splunk built-in app? Should I delete it, or how can I resolve this issue? Please help.
I have to parse the timestamp of JSON logs and I would like to include subsecond precision. My JSON events start like this:

{
  "instant" : {
    "epochSecond" : 1727189281,
    "nanoOfSecond" : 202684061
  },
  ...

Thus I tried this config in props.conf:

TIME_FORMAT=%s,\n "nanoOfSecond" : %9N
TIME_PREFIX="epochSecond" :\s
MAX_TIMESTAMP_LOOKAHEAD=500

That unfortunately did not work. What is the right way to parse this timestamp with subsecond precision?
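One possible approach, sketched under the assumption that json_extract is available to index-time INGEST_EVAL in your Splunk version and that the sourcetype is called my_json: let TIME_FORMAT handle the whole seconds, then overwrite _time with the nanosecond component in a transform.

props.conf

[my_json]
TIME_PREFIX = "epochSecond"\s*:\s*
TIME_FORMAT = %s
MAX_TIMESTAMP_LOOKAHEAD = 40
TRANSFORMS-nanotime = add_nanoseconds

transforms.conf

[add_nanoseconds]
# Recompute _time as epochSecond plus the fractional nanoOfSecond (assumes valid JSON in _raw)
INGEST_EVAL = _time=tonumber(json_extract(_raw,"instant.epochSecond")) + tonumber(json_extract(_raw,"instant.nanoOfSecond"))/1000000000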
How can we send a file as input to an API endpoint from custom SPL commands developed for both Splunk Enterprise and Splunk Cloud, ensuring the API endpoint returns the desired enrichment details?
We have a Microsoft SQL database stored procedure (SP) with a few hundred lines. When we tried to analyze the content of this SP under the controller tab Databases | Queries | Query Details | Query, the query text got truncated. Is there a setting that can increase the captured SQL text size? The controller build is 24.6.3, and the DB Agent version is 23.6.0.0.
We are using the Splunk forwarder to forward Jenkins data to Splunk, and noticed that Splunk does not display all the data. Here is an example:

index=jenkins_statistics (host=abc.com/*) event_tag=job_event job_name="*abc/develop*"
| stats count by job_name, type

This returns completed = 74 and started = 118. Ideally, whatever is started should also be completed, so can you help me figure out what could be the problem?
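One way to narrow down which builds started but never reported a completed event (a sketch — build_number is an assumed field name; substitute whatever uniquely identifies a run in your Jenkins events):

index=jenkins_statistics host="abc.com*" event_tag=job_event job_name="*abc/develop*"
| stats values(type) as types by job_name, build_number
| where isnull(mvfind(types, "completed"))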
I went through the process of stopping Splunk on all components and untarring the installation file to the /opt directory with the -C option. After completing the untar, I ran the command and accepted the upgrade and license. Everything says done until the very end, when I get "WARNING: web interface does not seem to be available."

I checked splunkd.log and I see this message:

ERROR ClusteringMgr [60815 MainThread] - pass4SymmKey setting in the clustering or general stanza of server.conf is set to empty or the default value. You must change it to a different value.

I checked server.conf and compared it with the backup I made of the entire Splunk etc/system/local directory, and the config in the files is the same.
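In case it helps to see the shape of the fix: a sketch of the relevant stanzas in $SPLUNK_HOME/etc/system/local/server.conf (the values are placeholders — set the same secret on every member of the cluster, then restart; Splunk encrypts the plaintext value in place):

[general]
pass4SymmKey = <your_shared_secret>

[clustering]
pass4SymmKey = <your_cluster_secret>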
We use dynamic tags, like ticket numbers or alert IDs on all of our containers. We have a retention policy that deletes containers after a year of them not being updated. I would like something that removes all the unused tags, similar to that retention policy. So, if a tag with an event ID is no longer being used, it will delete the tag. We currently have thousands of tags and it starts to bug the UI. 
I'm trying to resolve an issue where Splunk sends email reports, but the information exported as an attachment uses a "chron number" format for dates instead of a more readable format like "September 30, 2024." Where can I implement a fix for this, and how can I do it?
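One way to handle this in the report's search itself — a sketch assuming the date column comes from _time and the attachment is a CSV export. fieldformat only changes on-screen rendering, so use eval so that the readable string is what actually gets exported:

... | eval Date=strftime(_time, "%B %d, %Y")
    | table Date, <your other fields>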
I have sample data like below. I need to display single value counts of Completed and Pending in 2 different single value panels, with their percentage in brackets (screenshot is attached).

Total=10
Completed=6
Pending=4

So the first single value panel should show the Completed count as 6 (60%) and the second should show the Pending count as 4 (40%), as in the photo. Please provide me the query.

ServerName    UpgradeStatus
==========    =============
Server1       Completed
Server2       Completed
Server3       Completed
Server4       Completed
Server5       Completed
Server6       Completed
Server7       Pending
Server8       Pending
Server9       Pending
Server10      Pending
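A sketch for the Completed panel (the Pending panel is the same with the status swapped), assuming the base search returns one event per server:

... | stats count(eval(UpgradeStatus="Completed")) as Completed, count as Total
    | eval Completed=Completed." (".round(Completed/Total*100)."%)"
    | fields Completed

A Single Value visualization over this result then shows 6 (60%).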
Hi all,

Since v9.3 there seems to be a different method for displaying nav menus. When you update the <label> tag of a view from an external editor, those changes are not reflected in the navigation until a local storage object is deleted. /debug/refresh or restarting Splunk doesn't refresh the navigation. I was able to update the navigation by deleting the following object: Chrome -> Developer Tools -> Application -> Local Storage -> splunk-appnav:MYAPP:admin:en-GB:UUID, which contains this data structure:

{
  "nav": [
    {
      "label": "Search",
      "uri": "/en-GB/app/testapp/search",
      "viewName": "search",
      "isDefault": true
    },
    {
      "label": "testview",
      "uri": "/en-GB/app/testapp/testview",
      "viewName": "testview"
    }
  ],
  "color": null,
  "searchView": "search",
  "lastModified": 1727698963355
}

I'm wondering why the content of the nav is now saved on the client side. This is different behaviour than on v9.1 and v9.2. If I had to guess, they tried to improve the response time of the web UI. But how do I ensure that every user receives the latest version of the navigation menu in an app?

Best regards,
Andreas
The Cybersecurity Paradox: How Cloud Migration Addresses Security Concerns
Thursday, October 17, 2024 | 10AM PT / 1PM ET

Are you ready to transform your security operations and unlock the full potential of the cloud? In this session, TekStream, an Elite Splunk Partner, will dive into how cloud migration can not only enhance your security but also deliver measurable ROI through the AI-powered Splunk Cloud Platform (SaaS). By the end of the session, you'll be equipped with tools and best practices to strategize, optimize, and build the internal team to manage your security programs with long-term efficiency and scale. Don't miss this opportunity to educate, empower, and set your organization up for lasting success in the cloud. To learn more, register today!
We have a report that generates data with the `outputlookup` command, and we need to schedule it multiple times with different time ranges. We want it to run each day over different time ranges in sequential order; each run requires the previous run to finish so it can load the lookup results for the next run. We can't just schedule a single report that updates the lookup, because we need it to run over a different time range each time it triggers. Is there any way we can schedule a report to run in this particular way? We thought about cloning it multiple times and scheduling each clone differently, but that is not an ideal solution. Regards.
Hello, I'm having trouble connecting a Splunk instance to an Azure Storage Account. After the account was set up, I configured my Splunk instance to connect to the Storage Account using the Splunk Add-on for Microsoft Cloud Services. When I enter the Account Name and the Account Secret it gives an error. This was configured from "Configuration" > "Azure Storage Account" > "Add". I have checked the Account Name and the Access Key; they are correct. Looking at the logs, this was the only noticeable error that pops up:

log_level=ERROR pid=3270316 tid=MainThread file=storageaccount.py:validate:97 | Error <urllib3.connection.HTTPSConnection object at 0x7e14a4a8e940>: Failed to establish a new connection: [Errno -2] Name or service not known while verifying the credentials: Traceback (most recent call last):

Other than this, I saw some HTTP requests with 502 errors in splunkd.log, but I don't know if that is related. I have checked whether the Splunk machine can reach the Azure resource, and it can; it can also make API calls correctly. At this point I have no idea what could cause this problem. Do you have any idea what checks I could do to see where the problem is? Did I miss some configuration? Could it be a problem on the Azure side, and if so, what should I check? (I used the official guide: https://splunk.github.io/splunk-add-on-for-microsoft-cloud-services/Configurestorageaccount/) Thanks a lot in advance for your help.
Hi,

I am looking to monitor the dispatch directory over time. I know I can get the current count by using this:

| rest /services/search/jobs
| stats count

But I am looking to run the check every minute and have a per-minute breakdown of how the dispatch directory grows over time.

Rob
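One straightforward sketch (assuming a summary index named summary is available to write to): schedule the REST search to run every minute and collect the count, then chart what has been collected.

Scheduled every minute:

| rest /services/search/jobs
| stats count as dispatch_jobs
| collect index=summary source=dispatch_monitor

Reporting search:

index=summary source=dispatch_monitor
| timechart span=1m max(dispatch_jobs) as dispatch_jobs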
In my Splunk dashboard I have a table with timestamps in UTC, but my team works across multiple time zones, so they want me to make PST the default time zone in the dashboard. How can we achieve this in a Splunk dashboard?
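A sketch of one option, assuming the table is driven by _time and the dashboard currently renders times as UTC; a fixed offset like this does not handle daylight-saving changes, so per-user time zone preferences in Splunk Web remain the more robust route:

... | eval time_PST = strftime(_time - 8*3600, "%Y-%m-%d %H:%M:%S")
    | table time_PST, <your other fields>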
I did a recent upgrade of Splunk, but now I notice my clients are not phoning home for some reason. This is my first upgrade in a production environment, so any help troubleshooting would be great. I still see my client configs on the backend, but I'm not sure why they are not reporting in the GUI.
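A sketch for narrowing this down, assuming the deployment server's _internal logs are searchable; phone-home activity and deployment client errors land in splunkd.log, so a keyword search can show which clients last checked in and what they complained about:

index=_internal sourcetype=splunkd (phonehome OR deploymentclient)
| stats latest(_time) as last_seen, values(log_level) as log_levels by host
| eval last_seen=strftime(last_seen, "%Y-%m-%d %H:%M:%S")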