All Posts

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

_get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.
If I understand your requirements correctly, the easiest approach would be to use the transaction command with relatively low thresholds for transaction continuity. But the transaction command is relatively resource-intensive, so you might want to try a streamstats-based approach instead.
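As a hedged sketch of the streamstats-based alternative (field names and the 60-second pause threshold here are illustrative, assuming `EquipmentID` is the grouping key): sessions are started whenever the gap to the previous event exceeds the threshold, then durations are computed per session.

```
| sort 0 _time
| streamstats current=f last(_time) as prev_time by EquipmentID
| eval new_session=if(isnull(prev_time) OR _time-prev_time>60, 1, 0)
| streamstats sum(new_session) as session_id by EquipmentID
| stats min(_time) as start max(_time) as end count by EquipmentID session_id
| eval duration=end-start
```

The equivalent transaction version would be roughly `| transaction EquipmentID maxpause=60s`, but the streamstats form avoids holding whole transactions in memory.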
If I understand correctly, you want to know when the equipment changed to its current status, so long as the current status is not "null"? Try something like this:

| eventstats last(Status) as lastStatus by EquipmentID
| where lastStatus!="null"
| streamstats last(Status) as previous current=f global=f by EquipmentID
| where Status=lastStatus AND Status!=previous
| stats last(_time) as lastTime last(Status) as lastStatus by EquipmentID
| eval duration=now()-lastTime
Hello, I have the following data:

Time   EquipmentID   Status   JobID
12:00  1             "null"   10
12:01  2             "null"   20
12:02  2             X        20
12:03  2             X        20
12:04  1             X        10
12:05  1             Y        10
12:06  1             Y        20
12:07  2             Y        20
12:08  1             X        10
12:09  2             Y        20
12:10  1             "null"   11
12:11  2             "null"   21
12:12  2             "null"   21
12:13  1             "null"   11
12:14  1             "null"   11
12:15  2             X        21
12:16  1             X        11
12:17  2             X        21
12:18  1             "null"   11
12:19  2             Z        21
12:20  2             Z        21

I want to use this data to set up a dashboard that shows the current duration of equipment where the Status is not "null" ("null" is a string in this case, not a null value).

- Each JobID only has one EquipmentID
- The same status can occur and disappear multiple times per JobID
- There are around 10 different statuses
- I want the results to show only durations above 60 seconds

If the current time is 12:21, I would like the result to look like this:

EquipmentID   Duration   Most_recent_status
2             120        Z

This is the query I use now, but duration_now resets every time a new event occurs:

index=X sourcetype=Y JobID!="null"
| sort 0 _time
| stats last(_time) as first_time last(Status) as "First_Status" latest(status) as Last_status latest(_time) as latest_times values(EquipmentID) as Equipment by JobID
| eval final_duration = case(Last_status="null", round(latest_times - first_time,2))
| eval duration_now = case(isnull(final_duration), round(now() - first_time,2))
| eval first_time=strftime(first_time, "%Y-%m-%d %H:%M:%S")
| eval latest_times=strftime(latest_times, "%Y-%m-%d %H:%M:%S")
| sort - first_time

Any help would be greatly appreciated.
Hi @faizalabu, to implement HA on HFs, you have to install at least two HFs in your infrastructure and a Load Balancer that distributes traffic between the HFs and manages failover. Then you should manage the HFs using a Deployment Server, which guarantees that configurations are always aligned between them. There isn't a cluster of HFs like Indexers or Search Heads. Ciao. Giuseppe
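As a hedged illustration of the Deployment Server part, each HF would point at the DS via deploymentclient.conf (the hostname and management port below are placeholders for your environment):

```
# $SPLUNK_HOME/etc/system/local/deploymentclient.conf on each HF
[deployment-client]

[target-broker:deploymentServer]
# ds.example.com is a placeholder; 8089 is the default management port
targetUri = ds.example.com:8089
```

With both HFs in the same server class on the DS, any app or config change is pushed to both, keeping them interchangeable behind the load balancer.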
Hi Team, I want to implement HF in HA in a container setup. Can you help here?
Since you have been playing around with the search, which search with the 15 minute timeframe are you currently using?
Hello @danielcj,   Thank you for keeping me informed
Hi @ITWhisperer, I reduced the timeframe to 15 mins, and now I have only a few thousand events, but the query is still not giving any output.
As I said, subsearches are limited to 50k events - you have 85k events, so the subsearch is not performing as you are expecting. You either need to limit the events your subsearch uses, e.g. change the timeframe, or rework your whole search so that it doesn't need a subsearch.
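One way to avoid the subsearch entirely is to search both event sets in a single base search and correlate with stats. This is only a sketch: the sourcetype names are placeholders, and it assumes both sets of events carry (or can be extracted into) a shared request_id field.

```
(sourcetype="my_source" "failed request, request id=") OR (sourcetype="my_other_source")
| rex "request id=(?<request_id>[\w-]+)"
| stats dc(sourcetype) as st_count values(sourcetype) as sourcetypes by request_id
| where st_count > 1
```

Because there is no subsearch, the 50k-result limit no longer applies; stats scales to the full event volume.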
Hi @ITWhisperer, I am not sure if this helps, but I see 2 fields in the statistics result: 1. request_ids - this is empty; 2. search result - this is where I see all the request_ids and results.
We have two different sites/regions in Splunk Cloud: one in North America and the other in Europe. There is an ES migration planned such that all the alerts and data reporting to the Europe region will be migrated to the North America region, and there will be only one ES in the North America region. This is a unique scenario and I have never done any such migration; can the community please help me on how to plan this type of migration? I need to prepare a comprehensive plan for this ES migration, highlight all possible changes/modifications/risks that need to be addressed, and also figure out the dependencies. Please help if you have any insights.
Hello, my DB Connect is displaying this error when I'm trying to access it: "Can not communicate with task server, check your settings". I had configured the app before and everything was working, but then I started to receive this error in the web app. The DB Connect app is not showing any configured DBs, just errors. Can you suggest anything? BR
Which is the obsolete function?
Hi @ITWhisperer, playing around further:

sourcetype="my_source" "failed request, request id="
| rex "failed request, request id==(?<request_id>\"?[\w-]+\"?)"
| stats values(request_id) as request_ids
| eval request_ids = "\"" . mvjoin(request_ids, "\" OR \"") . "\""
| eval request_ids = replace(request_ids,"^request_id=","")
| format

This gives me output like below:

( ( request_ids="\"0fb1-4a2-a3-b8b\" OR \"0b99-d2-4e\" OR \"0c2-01a0-454-a3-2f3\"" ) )

but there is still `request_ids` in the output, so my main query does not work as expected.
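The `request_ids=` prefix appears because `format` emits fieldname=value pairs. A common workaround (a sketch, with the sourcetype name kept as in your query) is to skip `format` and return a field literally named `search`: when a subsearch returns a `search` field, its value is substituted verbatim into the outer search.

```
sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| stats values(request_id) as request_id
| eval search="(\"" . mvjoin(request_id, "\" OR \"") . "\")"
| fields search
```

Used as `index=... [ <the search above> ]`, the outer search then receives only the OR-ed quoted IDs, with no field name attached.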
For anyone else like me in the future trying to get this to work, the solution from @ITWhisperer is for use in a dashboard. You should be able to get this to work outside a dashboard like so:  | inputlookup test.csv | map search="| makeresults | map search=\"$magic$\""
Hi @ITWhisperer, Sorry, I didn't get you. I see a total of 267 events matched out of 85k events. I am not sure if this answers your question.
Hi guys, I've tried to set up an alert with two alert actions (email and Slack) from a custom app. When the alert triggered:

02-09-2024 21:40:04.155 +0000 INFO SavedSplunker - savedsearch_id="nobody;abc example alert (NONPRD)", search_type="scheduled", search_streaming=0, user="myself@myself.com", app="abc", savedsearch_name="example (NONPRD)", priority=default, status=success, digest_mode=1, durable_cursor=0, scheduled_time=1707514800, window_time=-1, dispatch_time=xxxxxxxx, run_time=0.884, result_count=2, alert_actions="email", sid="scheduler_xxxxxxxxxx", suppressed=0, thread_id="AlertNotifierWorker-0", workload_pool="standard_perf"

However, I received the email alert but not the Slack alert. Is there any way to debug why the Slack alert was not sent when there are two alert actions? How do I know whether the webhook URL is correct and working? Can someone please provide the complete steps to troubleshoot issues like this? Thank you! T
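One hedged starting point for debugging: modular alert actions (like the Slack add-on) log their invocations and script output to _internal under the `sendmodalert` component, so a search like the following should show whether the Slack action was ever invoked and whether it errored (the exact extracted fields can vary by Splunk version, so inspect the raw events too):

```
index=_internal sourcetype=splunkd component=sendmodalert
| table _time log_level action _raw
```

If no `action=slack` events appear at all, the action was never triggered (note your scheduler log shows only alert_actions="email"); if they appear with errors, the messages there usually point at the webhook URL or payload.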
@EPitch Do you mean if the sum of count is > 10, or if the number of distinct name/ip/id combinations is more than 10? If the former, then putting a | head 11 after your search should speed it up: although it will probably still process the query data fully, it will only retain at most 11 results, so if you then run stats count and the count is 11, you know you have more than 10.
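A sketch of that idea, with `<your base search>` as a placeholder for the existing search:

```
<your base search>
| head 11
| stats count
| eval more_than_10=if(count > 10, "yes", "no")
```

Since head stops retaining results at 11, the final stats sees at most 11 rows regardless of how large the underlying result set is.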
@interrobang ok, got it. The easiest thing to do is to add the following to your dropdown search after the dedup Servername:

| appendpipe
    [ stats values(Servername) as Servername
    | format
    | rename search as Servername
    | eval name="All"
    | eval order=0 ]
| sort order Servername
| fields - order

What this does is add a new row at the end with all the server names and create a new name field, which will have either the Servername or "All". The purpose of order is to sort "All" to the top and then the servers in sorted order. Set the fieldForValue to be Servername and the fieldForLabel to be name. Then if you select All, it will have Servername=A OR ...

See this example to see how it works:

| makeresults
| fields - _time
| eval Servername=split("ABCD","")
| mvexpand Servername
| eval name=Servername
| eval Servername="Servername".Servername
| appendpipe
    [ stats values(Servername) as Servername
    | format
    | rename search as Servername
    | eval name="All"
    | eval order=0 ]
| sort order Servername
| fields - order