All Posts



Just as an aside on the use of map: note that it is not a practical command for large datasets, because each input result is executed as its own search, serially, so it can take time and, depending on the search, can cause a lot of overhead when iterating through large result sets. Often there is an alternative way to write the search (but not always); it depends on the use case.
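As a hedged illustration of that alternative (index and field names here are made up for the example), a per-result lookup with map, such as:

```
index=orders | stats count by order_id
| map maxsearches=100 search="search index=shipments order_id=$order_id$"
```

can often be rewritten as a single pass over both indexes, letting stats correlate by the shared field instead of launching one search per row:

```
(index=orders) OR (index=shipments)
| stats values(status) as status count by order_id
```

This runs once over the combined data rather than N serial searches, which is usually far cheaper on large result sets.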
I'm trying to have a timechart showing the count of events by a category, grouped by week. The search time is controlled by a radio button on the dashboard with options from 1 to 12 weeks, with the end date set to @w. I then have a drilldown that shows a table with more info about each event for that category in that time range.

mysearch .... | dedup case_id | timechart span=1w count by case_category

The chart looks fine, but when I click on certain sections to load the drilldown, much more data appears than the count in the timechart suggested. For instance, for Nov 19-25 the timechart shows 26 events, but the drilldown shows 61. When I open the drilldown search in Search, the issue seems to involve expanding the time range beyond one week: if I change the range from Nov 19-25 to Nov 19-27, the data from Nov 22-24 is either erased or reduced.

Nov 19-25 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 19 **
Nov 23: 20 **
Nov 24: 1 **
Nov 25: null

Nov 19-28 stats count results:
Nov 19: null
Nov 20: 8
Nov 21: 14
Nov 22: 5 **
Nov 23: null **
Nov 24: null **
Nov 25: null
Nov 26: null
Nov 27: 35
Nov 28: 1
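A likely cause (an assumption, since the full search isn't shown) is that dedup case_id keeps only one event per case_id across the whole search range, so a case that recurs in a later week is counted only once, in whichever week dedup happened to keep; widening the range therefore changes which week each case lands in. One way to make the weekly counts stable regardless of the selected range is to dedup within each week instead:

```
mysearch ....
| bin _time span=1w
| dedup case_id _time
| timechart span=1w count by case_category
```

With this, each case is counted at most once per week, and the drilldown for a single week (deduping over the same week) should match the chart.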
Thank you for that, @dhatch ! This feature should default to "true" in $SPLUNK_HOME/etc/system/default/server.conf starting with Splunk 9.1.x. In previous versions, according to the web.conf.spec file, this attribute is not set at all. Whenever making changes to .conf files in Splunk, please edit files under $SPLUNK_HOME/etc/system/local. If you do not yet have a web.conf file in that path when making such a change, create it, then include only the stanza(s) and attribute(s) you wish to modify. Settings configured via $SPLUNK_HOME/etc/system/local take precedence over all other configuration files on all Splunk instances EXCEPT clustered indexers. On clustered indexers, the apps deployed by the Cluster Manager take precedence even over $SPLUNK_HOME/etc/system/local.  See: Splunk documentation on precedence This useful community article  
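For reference, a minimal local override would look like the sketch below. The [settings] stanza is web.conf's standard stanza; the attribute name is left as a placeholder, since the thread does not name the specific setting (check web.conf.spec for the exact attribute):

```
# $SPLUNK_HOME/etc/system/local/web.conf
[settings]
# include ONLY the attribute(s) you wish to override, e.g.:
someAttribute = true
```

Everything not listed here continues to come from the default files, which is why you should never copy a whole default .conf file into local.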
Hi @aditsss ... if any reply solved your query, could you please accept it as a solution? Karma points / upvotes are appreciated. Thanks. 
Hi @abi2023 ... please check whether the Splunk service is running correctly.
Greetings Community! I have a question regarding the Splunk Cloud license (classic), particularly about what happens when the license expires.
- Is there a message informing us that the license is about to expire?
- Is there any grace period after the expiration date?
- If I decide not to renew the license, is there a way to download the company data before its total removal, or do I lose all indexed data once the license expires?
Thanks in advance for any information on this matter. Kind Regards, Marcelo
I want to run an Enrichment playbook inside a custom function. Looking to pass in a list of devices and call the playbook each time passing in a single deviceId at a time. What is the best way to do this?
I am getting the error message "WARNING: web interface does not seem to be available!" I just installed Splunk on my Mac.
Right, I have to say I agree with you there, and I would recommend voting on these Ideas in order to raise awareness of the limitation.  Edit: to expand on that - the issue with even the solution I mentioned is that you would need to retroactively apply the *.meta access-control approach to each existing and future secret on your deployment in order to actually address the use case in the OP. 
I am trying to make a query that gives me the result of unique file names, with the date in columns and a one-hour time span in rows. Below is my query:

index="app_cleo_db" origname="GEAC_Payroll*" | rex "\sorigname=\"GEAC_Payroll\((?<digits>\d+)\)\d{8}_\d{6}\.xml\"" | search origname="*.xml" | eval Date = strftime(_time, "%Y-%m-%d %H:00:00") | eval DateOnly = strftime(_time, "%Y-%m-%d") | transaction DateOnly, origname | timechart count by DateOnly

But it is giving me an output with the date as well as the timestamp in each row, like below:

_time                2023-12-02  2023-12-03
2023-12-02 00:00:00  8           0
2023-12-02 00:30:00  0           0
2023-12-02 01:00:00  0           7
2023-12-02 01:30:00  0           0
2023-12-02 02:00:00  6           0
2023-12-02 02:30:00  0           0
2023-12-02 00:00:00  2           0
2023-12-03 00:30:00  0           5
2023-12-03 01:00:00  0           0
2023-12-03 01:30:00  0           20
2023-12-03 02:00:00  0           0
2023-12-03 02:30:00  34          0

I want the result to look like below:

_time     2023-12-02  2023-12-03
00:00:00  0           0
01:00:00  0           0
02:00:00  0           0
03:00:00  0           0
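One way to get hour-of-day rows with one column per date is to chart over an hour-of-day field instead of using timechart, which always keys rows on the full _time value. A sketch (hedged: this counts distinct file names per hour with dc(), which may or may not match what transaction was doing for "unique" here):

```
index="app_cleo_db" origname="GEAC_Payroll*" origname="*.xml"
| eval HourOfDay = strftime(_time, "%H:00:00")
| eval DateOnly  = strftime(_time, "%Y-%m-%d")
| chart dc(origname) over HourOfDay by DateOnly
```

Since chart's row key is now just "%H:00:00", events from different dates but the same hour land in the same row, with each DateOnly value becoming its own column.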
In my case, I forgot to enable listening on my indexers. (You'll need to do this on any heavy forwarders as well.)  /opt/splunk/bin/splunk enable listen 9997
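Equivalently, the same listener can be enabled in configuration (a sketch; the CLI command above writes this same stanza):

```
# $SPLUNK_HOME/etc/system/local/inputs.conf
[splunktcp://9997]
disabled = 0
```

A restart of splunkd is needed for the change to take effect if you edit the file by hand.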
I see. I tried with different variables, but _raw seems to hold all the vital data in all cases. Maybe I'm not doing something right; perhaps the part that is not in JSON format is the output inside the "stdout" field.

EDIT: Here's the event in log format (as displayed in the event viewer; [+] marks a collapsed object):

{
  cluster_id: cluster_id
  kubernetes: { [+] }
  log: {
    caller: caller_here
    dc: dc_here
    flow: flow_here
    host: gatling_worker_here
    jobId: jobid_here
    level: info
    projectName: project_name_here
    stdout: { "Componente" : "componente_here", "channel" : "channel_here", "timestamp" : "timestamp_here", "Code" : "code_here", "logId" : "logid_here", "service" : "service_here", "responseMessage" : "responsemessage_here", "flow" : "flow_here", "log" : "log_here"}
  }
  time: time_here
}

stdout is the issue, it seems.
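Since stdout holds a JSON string embedded inside the outer JSON event, a second spath pass over that field may extract its inner keys (a sketch; the path log.stdout is assumed from the event structure shown above):

```
... | spath path=log.stdout output=stdout_raw
| spath input=stdout_raw
```

The first spath pulls the stdout string out of the outer event; the second parses that string as JSON in its own right, producing fields like Componente, channel, and responseMessage.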
Thanks for your response. I have received this error message along with the results.  Search results might be incomplete! This can occur if the peer unexpectedly closes or resets the connection during a planned restart. Try running your search again. 
You should have complete results, based on your description and a couple of assumptions:   I am assuming that you are connecting to a remote Splunk instance (probably a Search Head (SH), i.e. the Splunk web interface), and that you are not running Splunk locally on your computer.  In that case, you will have complete results.  Your computer does not have to stay connected to a Splunk SH for a background search to complete properly.
Hi @Rinku.singh, Was there any part of the documentation that was confusing? I can share this with the Docs team to see if they can clarify anything. 
Add the prestats option to the tstats command.  That will format the results for timechart to use. (Note the escaped dots in the regex; unescaped, each . would match any character.) | tstats prestats=t count where index IN (index1, index2, index3) by _time, host | where match(host, "\.device\.mycompany\.com$") | timechart count by host  
Hi All,   I have a Splunk search query executing in the background (using the Send to Background option). While it was running, my VPN disconnected; after some time I reconnected to the VPN, and the query was still running in the background. My question is: will it give me complete results, or could the results be incomplete?   Thanks
Hi, are you sure that you have events at that time in those indexes? Another option is that the saved searches which collect those events haven't run. See more: https://docs.splunk.com/Documentation/Splunk/9.1.2/Admin/LicenseUsageReportViewexamples r. Ismo
I added data to the SPL because I don't have your data indexed in my Splunk.  Since you have the data indexed, you can skip that part of my example query.  You may need to change the spath command argument to match your events.
Hi, when you use a package manager like yum or dpkg, the installation adds the user splunk and changes the ownership of Splunk's files to that user. To give that user access to local files, you can grant access with setfacl, either per file or recursively per directory. r. Ismo
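A sketch of what that looks like in practice (the paths are examples; substitute the files you want Splunk to monitor):

```shell
# Grant the splunk user read access to a single log file
setfacl -m u:splunk:r /var/log/syslog

# Recursively grant read access to a directory tree;
# capital X grants execute (traverse) only on directories
setfacl -R -m u:splunk:rX /var/log/myapp

# Also set a default ACL so files created later inherit the access
setfacl -dR -m u:splunk:rX /var/log/myapp
```

Without the default (-d) ACL, log rotation would create new files that the splunk user cannot read; you can verify the result with getfacl on the affected paths.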