All Posts

I'll try this next, okay.
I am trying to disable the Splunk Secure Gateway app in a clustered environment. However, I don't see an option to disable the app under Apps -> Manage Apps; it only displays the app's current status, which is "Active". I also tried the same on a single-node installation, where there is an option to disable the app right next to its current status in the same menu, i.e. Apps -> Manage Apps. So, how can I disable Splunk Secure Gateway in a clustered environment?
One more remark: you should never install additional modules directly into Splunk's bundled Python! If you need extra libraries, create an app (see dev.splunk.com) and ship those libraries inside it.
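The pattern described above (shipping libraries inside your own app rather than installing into Splunk's Python) can be sketched roughly as follows. The app name `my_app` and the `lib` subdirectory are hypothetical; in a real app script the path is usually derived from `__file__` so the app can live anywhere.

```python
import os
import sys

# Hypothetical app layout: $SPLUNK_HOME/etc/apps/my_app/lib holds the
# bundled third-party packages shipped with the app.
SPLUNK_HOME = os.environ.get("SPLUNK_HOME", "/opt/splunk")
LIB_DIR = os.path.join(SPLUNK_HOME, "etc", "apps", "my_app", "lib")

# Prepend the app-local lib directory so the bundled copies are found
# before anything installed into Splunk's own Python.
if LIB_DIR not in sys.path:
    sys.path.insert(0, LIB_DIR)

print(sys.path[0])  # the app's lib directory now leads the search path
```

This keeps third-party dependencies versioned with the app itself, so a Splunk upgrade (which replaces the bundled Python) cannot break them.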
Case does matter - as far as Splunk is concerned they are two different hosts. You could try converting to lower case:

(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=lower(coalesce(dv_name, host))
| stats dc(index) as idx_count, values(index), values(dv_os), values(dv_install_status) by asset_name
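The lowercasing trick in the search above can be illustrated outside Splunk. A minimal Python sketch, with toy events standing in for the two indexes from the question:

```python
from collections import defaultdict

# The same asset appears as "HOST01" in one index and "host01" in the other.
events = [
    {"index": "windows", "host": "HOST01"},
    {"index": "cmdb", "dv_name": "host01", "dv_os": "Windows Server"},
]

groups = defaultdict(set)
for e in events:
    # Mirrors: eval asset_name=lower(coalesce(dv_name, host))
    name = (e.get("dv_name") or e.get("host")).lower()
    groups[name].add(e["index"])

print(dict(groups))  # both events now fall under the single key "host01"
```

Without the `.lower()`, "HOST01" and "host01" would land in two separate groups, which is exactly the symptom in the original question.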
Try running the search in the Search app and look at the job. Here I have done a similar search, but I don't have access to your data and my indexes don't hold data as far back as a year, so I have used the last hour and the same time the previous day:

index=_audit
    [| makeresults
     | eval latest=relative_time(now(),"@h")
     | eval row=mvrange(0,2)
     | mvexpand row
     | eval latest=relative_time(latest,"@h-".row."d")
     | eval earliest=relative_time(latest,"-1h")
     | table earliest latest]
| bin span=1h _time
| stats count by _time

Go to Inspect Job, then Job Details Dashboard, and look at the Map Phase Search String. You should see the time periods being searched. They will be in epoch time, so you can copy them into another search to show their formatted versions. Do yours correlate to the values you were expecting?
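The subsearch above generates two one-hour windows: the last whole hour, and the same hour one day earlier. A small Python sketch of the same window arithmetic, which may help when checking the epoch values that appear in the Map Phase Search String:

```python
from datetime import datetime, timedelta

# Snap "now" to the hour, like relative_time(now(), "@h")
latest_today = datetime.now().replace(minute=0, second=0, microsecond=0)

windows = []
for row in range(2):  # row=0 -> today, row=1 -> same hour yesterday ("@h-<row>d")
    latest = latest_today - timedelta(days=row)
    earliest = latest - timedelta(hours=1)  # relative_time(latest, "-1h")
    windows.append((earliest, latest))

for earliest, latest in windows:
    # Epoch values like these are what show up in the job's search string
    print(int(earliest.timestamp()), int(latest.timestamp()))
```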
Hi @KendallW, do the coalesce or rename commands treat the hostnames differently if their case differs? The hostname is lower case in one index, and the other index has the same hostname in upper case. Is the merge case-sensitive? For example, HOST01, which is one of the values in the host field of index=windows, is actually host01 in index=cmdb (under the dv_name field). That would explain why the consolidation via coalesce or rename isn't working.
The no-JS solution works wonderfully and you get my karma points. However, it doesn't seem to acknowledge the $filename_token$ (which I also set after the search is done; no need for extra tokens, the job_sid not being null is enough for depends) - it always offers to save the file with the name "results". Just a minor thing; it's still very usable and elegant.
Hi, I wanted to see the predictive monitoring feature in the ITSI product. How can I see a free tour of it? Kindly help me, please.
It shouldn't matter what is contained in the host field in the 'cmdb' index, as we are overwriting it; there is no problem with overwriting default fields in a search. Regardless, I still can't see why your original query didn't work. There may be whitespace or other strange characters in some of the field values from one of the indexes, causing them not to match the other index. Are you able to check this?
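One way to check for the stray whitespace or invisible characters suggested above, sketched in Python (the sample values are hypothetical; in Splunk you would compare field lengths before and after trimming instead):

```python
def suspicious(value: str) -> bool:
    # Flags leading/trailing whitespace or non-printable characters that
    # make visually identical hostnames fail to match.
    return value != value.strip() or not value.isprintable()

# Hypothetical field values; the last one hides a zero-width space.
hosts = ["HOST01", "host01 ", "host02\u200b"]
flagged = [h for h in hosts if suspicious(h)]
print(flagged)
```

Values flagged this way look identical to the clean ones when printed, which is why this class of mismatch is so easy to miss.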
That didn't work. The query does not show any results if we rename dv_name to host. That is because host is a default field, and for index=cmdb the host field originally contains the name of the log source (ServiceNow) sending the asset information to Splunk. Renaming overwrites the default field. Thanks for replying, though.
Hi @neerajs_81, try just renaming the dv_name field instead of creating a new field with coalesce, e.g.:

(index=cmdb sourcetype=server) OR (index=windows)
| rename dv_name as host
| stats dc(index) as idx_count, values(index), values(dv_os), values(dv_install_status) by host
We have enabled an On Demand Capture Session for capturing the memory leaks on one of our nodes. After the session ends, we are unable to see the detection dashboard.
Hi all, I need to consolidate/correlate data from two different indexes, as explained below. I have gone through multiple relevant posts from experts on this forum, but for my use case the same query isn't working. My situation:

In index=windows, the field "host" contains all the different hosts sending logs to Splunk, for example Host01, Host02, etc. In another index, index=cmdb, the field "dv_name" contains the same hostnames. There are also other fields in this index, like dv_status and dv_os, which I need in the final output. So the common link is the host field; its name differs across the two indexes, but the values are the same.

When I run the following query to get my expected output, it only pulls data from the windows index. It completely ignores the cmdb index, even though the cmdb index has events from the same hosts in whatever time range I select.

(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=coalesce(dv_name, host)
| stats dc(index) as idx_count, values(index) values(dv_os), values(dv_install_status) by asset_name

Output it is showing:

asset_name | idx_count | index   | dv_os | dv_status
Host01     | 1         | windows |       |
Host02     | 1         | windows |       |

Expected output:

asset_name | idx_count | index         | dv_os          | dv_install_status
Host01     | 2         | windows, cmdb | Windows Server | Production
Host02     | 2         | windows, cmdb | Windows Server | Test
Thanks, but for me the problem is that I need relative year-over-year data in the chart. If I select 2023, Aug 12 for the last 30 days, then in the chart I need two lines: 2023 data from now back 30 days, and 2022 data from "now-1y" back 30 days. Can we plot this in a single timechart?
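The two time windows described above (the current 30 days, and the matching 30 days one year earlier, as used by year-over-year overlays such as Splunk's timewrap command) come down to simple date arithmetic. A minimal sketch, using a hypothetical "now" taken from the question:

```python
from datetime import datetime, timedelta

now = datetime(2023, 8, 12)  # hypothetical "now" from the question

# Line 1: the last 30 days
w1 = (now - timedelta(days=30), now)

# Line 2: the same 30 days one year earlier ("now-1y" back another 30 days)
year = timedelta(days=365)
w2 = (now - year - timedelta(days=30), now - year)

print("2023:", w1[0].date(), "->", w1[1].date())
print("2022:", w2[0].date(), "->", w2[1].date())
```

To draw both on one chart, the older window's timestamps are shifted forward by one year so the two series share the same x-axis.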
2023:
  earliestTime  2023-07-10T00:00:00.000-07:00
  latestTime    2024-08-09T00:00:00.000-07:00
  modifiedTime  2024-08-11T21:58:51.151-07:00

2022:
  latestTime    2024-08-09T00:00:00.000-07:00
  modifiedTime  2024-08-11T21:58:51.151-07:00

I can see the 2022 and 2023 searches, but somehow I am not able to figure out where it is going wrong.
Hi @sidnakvee, welcome! I highly suggest checking out some of the free training offered by Splunk, especially this one about getting data into Splunk: https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/guestapp/ledetail/cours000000000003373

To answer your question: it sounds like you would like to send data from your local Windows machine to Splunk Cloud using the UF. To do this, you will indeed need to edit the inputs.conf file, for example:

[WinEventLog:Security]
disabled = 0

[WinEventLog:Application]
disabled = 0

[WinEventLog:System]
disabled = 0

[monitor://C:\Path\To\Sysmon\Logs]
disabled = 0

Make sure to restart Splunk on the UF after making any changes so that they are applied. Next, check that the UF is actually connected to your Splunk Cloud instance and forwarding its internal logs (index=_internal). If not, check the Splunk logs on the UF itself for any connectivity issues. The log files you want to check are "splunkd.log" and "metrics.log", located in ...\splunkforwarder\var\log\splunk\.
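When eyeballing splunkd.log for connectivity issues, a quick filter for error/warning lines mentioning the TCP output processor can help. A rough sketch; the sample lines below are hypothetical and only approximate splunkd.log's general shape:

```python
import re

def connectivity_issues(log_text: str) -> list[str]:
    """Return log lines that look like forwarder connectivity problems."""
    level = re.compile(r"\b(ERROR|WARN)\b")
    return [
        line
        for line in log_text.splitlines()
        if level.search(line) and ("TcpOutputProc" in line or "Connect" in line)
    ]

# Hypothetical sample lines, not real splunkd.log output:
sample = (
    "08-12-2024 10:00:01 INFO  TailReader - batch input finished\n"
    "08-12-2024 10:00:05 ERROR TcpOutputProc - Connection to 10.0.0.1:9997 failed\n"
)
print(connectivity_issues(sample))
```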
Hi, can we create widgets that display the drive utilization per volume, like My Computer does? I have to create a dashboard like the one above for separate partitions. Let me know if it is possible. Thanks.

^ Post edited by @Ryan.Paredez: split the post into a new one and updated the subject.
Thanks @yuanliu, let me organise my thoughts and query a bit after the long weekend. Cheers, and I appreciate the prompt reply and help!
Future Visitors: Python 3.7 is the version delivered with Splunk 9.2. Attempting to replace it with a newer version could have disastrous consequences.
In fact, I asked that very question in 2017, did not get an answer, and created this solution. Here is the link to my post: Solved: Re: Custom search command called multiple times - Splunk Community. There you can find a more detailed description of my solution.