All Posts



The No-JS solution works wonderfully and you get my karma points. However, it doesn't seem to acknowledge the $filename_token$ (which I also set after the search is done, no need for extra tokens, the job_sid not being null is enough for depends), it always offers to save the file with name "results". Just a minor thing, it's still very usable and elegant.
Hi, I wanted to see the predictive monitoring feature of the ITSI product. How can I access a free tour of it? Kindly help, please.
It shouldn't matter what is contained in the host field in the 'cmdb' index, as we are overwriting it. There is no problem with overwriting default fields in a search. Regardless, I still can't see why your original query didn't work. There may be whitespace or other strange characters in some of the field values from one of the indexes, causing them not to match the other index. Are you able to check this?
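The whitespace mismatch described above is easy to demonstrate outside Splunk. Here is a minimal Python sketch with hypothetical host values, showing how a trailing space or a non-breaking space makes otherwise-equal names fail to match until they are normalized (the equivalent of applying SPL's trim() before the stats):

```python
# Invisible whitespace makes otherwise-equal host names fail to match.
# The host values here are hypothetical examples.
windows_hosts = {"Host01", "Host02"}
cmdb_names = {"Host01 ", "\u00a0Host02"}  # trailing space, non-breaking space

# A raw comparison finds no overlap between the two sets...
assert windows_hosts & cmdb_names == set()

# ...but stripping whitespace (like SPL's trim()) restores the match.
normalized = {name.strip().strip("\u00a0") for name in cmdb_names}
assert windows_hosts & normalized == {"Host01", "Host02"}
print("overlap after normalizing:", sorted(windows_hosts & normalized))
```

In SPL terms, this corresponds to running something like `| eval dv_name=trim(dv_name)` before grouping, to rule out the mismatch.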
That didn't work. The query does not show any results if we rename dv_name to host. That is because host is a default field, and for index=cmdb the host field originally contains the name of the log source (ServiceNow) sending the asset information over to Splunk. Renaming it overwrites the default field. Thanks for replying, though.
Hi @neerajs_81, try just renaming the dv_name field instead of creating a new field with coalesce, e.g.:

(index=cmdb sourcetype=server) OR (index=windows)
| rename dv_name as host
| stats dc(index) as idx_count, values(index) values(dv_os), values(dv_install_status) by host
We have enabled an On Demand Capture Session for capturing memory leaks on one of our nodes. After the session ends, we are unable to see the detection dashboard.
Hi All, I need to consolidate / correlate data from 2 different indexes, as explained below. I have gone through multiple posts on this forum from experts on this topic, but somehow the same query isn't working for my use case. Here is my situation:

In index=windows, the field "host" contains all the different hosts sending logs to Splunk, for example Host01, Host02, etc. In another index, index=cmdb, the field "dv_name" contains the same hostnames. This index also has other fields, like dv_status and dv_os, which I need in the final output.

So, as explained above, the common link is the host name; the field name differs across the 2 indexes, but the values are the same. When I run the following query to get my expected output, it only pulls data from the windows index. It completely ignores the cmdb index, even though the cmdb index has data / events from the same hosts in whatever time range I select.

(index=windows) OR (index=cmdb sourcetype="snow:cmdb_ci_server" dv_name=*)
| eval asset_name=coalesce(dv_name, host)
| stats dc(index) as idx_count, values(index) values(dv_os), values(dv_install_status) by asset_name

Output it is showing:

asset_name  idx_count  index    dv_os  dv_status
Host01      1          windows
Host02      1          windows

Expected output:

asset_name  idx_count  index          dv_os           dv_install_status
Host01      2          windows, cmdb  Windows Server  Production
Host02      2          windows, cmdb  Windows Server  Test
Thanks, but for me the problem is the relative-year data I need in the chart. If I select 2023, Aug 12 for the last 30 days, then in the chart I need two lines: 2023 data from now back 30 days, and 2022 data from "now-1y" back 30 days. Can we plot this in a single timechart?
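The two windows being asked for are straightforward to pin down. Here is a small Python sketch computing them from the example date in the post (2023-08-12); in Splunk these would typically map to earliest/latest pairs for two appended searches, or to the timewrap command:

```python
from datetime import datetime, timedelta

# The picked date (2023-08-12, last 30 days) comes from the post above.
now = datetime(2023, 8, 12)

# Current-year window: now back 30 days.
this_year = (now - timedelta(days=30), now)

# Prior-year window: the same 30-day span shifted back one year.
anchor = now.replace(year=now.year - 1)
last_year = (anchor - timedelta(days=30), anchor)

print(this_year[0].date(), "to", this_year[1].date())  # 2023-07-13 to 2023-08-12
print(last_year[0].date(), "to", last_year[1].date())  # 2022-07-13 to 2022-08-12
```

Note that replace(year=...) would fail for Feb 29; for day-level dashboards like this one, that edge case rarely matters.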
2023:
  earliestTime   2023-07-10T00:00:00.000-07:00
  latestTime     2024-08-09T00:00:00.000-07:00
  modifiedTime   2024-08-11T21:58:51.151-07:00

2022:
  latestTime     2024-08-09T00:00:00.000-07:00
  modifiedTime   2024-08-11T21:58:51.151-07:00

I can see the 2022 and 2023 searches, but somehow I am not able to figure out where it is going wrong.
Hi @sidnakvee, Welcome! I highly suggest checking out some of the free training offered by Splunk, especially this one about getting data into Splunk: https://education.splunk.com/Saba/Web_spf/NA10P2PRD105/guestapp/ledetail/cours000000000003373

To answer your question, it sounds like you would like to send data from your local Windows machine to Splunk Cloud using the UF. To do this, you will indeed need to edit the inputs.conf file, for example:

[WinEventLog:Security]
disabled = 0

[WinEventLog:Application]
disabled = 0

[WinEventLog:System]
disabled = 0

[monitor://C:\Path\To\Sysmon\Logs]
disabled = 0

Make sure to restart Splunk on the UF after making any changes, so that the changes are applied. Next, check that the UF is actually connected to your Splunk Cloud instance and forwarding its internal logs (index=_internal). If not, check the Splunk logs on the UF itself for any connectivity issues. The log files you want to check are "splunkd.log" and "metrics.log", located in ...\splunkforwarder\var\log\splunk\.
Hi, can we create widgets that display drive utilization by volume, like in My Computer? I have to create a dashboard like the one above for separate partitions. Let me know if it is possible. Thanks.
^ Post edited by @Ryan.Paredez. Split the post into a new one and updated the subject.
Thanks @yuanliu, let me organise my thoughts and query a bit after the long weekend. Cheers, and I appreciate the prompt reply and help!
Future Visitors: Python 3.7 is the version delivered with Splunk 9.2. Attempting to replace it with a newer version could have disastrous consequences.
In fact, I asked that very question back in 2017, did not get an answer, and created this solution. Here is the link to my post: Solved: Re: Custom search command called multiple times - Splunk Community. There you can find a more detailed description of my solution.
Reviving the thread a year later: I have the same problem. I had it back in Splunk 6.6.2 and am still seeing it in Splunk 8.2.6, years later. No idea why, but I really needed to work around it. Here is what I came up with: I open a file, with a name derived from the search id, with exclusive access for creation in a special `lock` folder. If it succeeds, I proceed with the rest of the code. If it fails, I know the search was already "caught" by the previous run of the same command and bail out. Of course, I need some way of tidying up that `lock` folder, which can be done with a scripted input, and not too frequently: once a day or even once a week should be plenty. In theory, I should be able to remove (unlink) that lock file right from the second instance, but that came back to bite me, so I abandoned the idea. Might want to revisit it now, after so many years...
The first row can easily be excluded because there is no Count.  But the weird _raw signifies some unusual characteristics.  Failure to extract db_bulk_write_time suggests the same.  You need to post more realistic/representative data.
Also in this case:

_time                    processing_time  Count  no_msg_wait_time  _raw
2024-08-12 10:53:53.455  2.75             3      2000              s
2024-08-12 10:53:56.205  2.765

The 2nd row should be 2.75 instead.
Hi, I am new to Splunk and just got a free Cloud trial. I did the following:
1- Logged in to the Cloud trial instance
2- Created an index named winpc
3- Went to Apps > Universal Forwarder and downloaded it onto a Windows PC
4- Installed the forwarder on the Windows PC; during setup, selected "use with cloud instance"
5- Left the receiver index blank, as I had no idea about my Splunk instance FQDN / IP
6- Checked services: the Splunk Universal Forwarder service is running, logging on as Local System

Issues:
1- I can see no logs in the winpc index I created, even after waiting an hour or so
2- How can I tell the forwarder to forward Windows and Sysmon logs too? Should I edit the inputs.conf file?

Kindly guide and help so that I may get logs and learn further. Regards
This is what I get; my first row's Count is empty, and _raw is weird too:

_time                    processing_time  Count  db_bulk_write_time  no_msg_wait_time  _raw
2024-08-12 10:55:41.200  1.226                                       1000              .
2024-08-12 10:55:40.872  0.312            1                          0                 s
2024-08-12 10:55:37.122  3.75             1                          3000              s
2024-08-12 10:55:36.809  0.313            1                          0                 s
2024-08-12 10:55:33.106  3.688            1                          3000              s
2024-08-12 10:55:32.778  0.313            1                          0                 s
2024-08-12 10:55:29.028  3.75             1                          3000              s
2024-08-12 10:55:28.700  0.328            1                          0                 s
2024-08-12 10:55:24.950  3.75             1                          3000              s
2024-08-12 10:55:24.622  0.312            1                          0                 s
2024-08-12 10:55:21.888  2.734            1                          2000              s
2024-08-12 10:55:20.122  1.766            1                          1000              s
I requested again last week yet no reply.