All Posts


The reason why this failed after the first run is that you have changed the admin password to something different from what is configured in your Docker config file. When it tries to log in to Splunk via the REST endpoint with that user and password, it cannot, because the config still has the old password. You can fix this by setting your current admin password in the Docker config file and running it again.
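For illustration, a minimal docker-compose sketch, assuming the official splunk/splunk image; the service layout and the password value are placeholders, and SPLUNK_PASSWORD must match the admin password currently in effect inside the instance:

version: "3"
services:
  splunk:
    image: splunk/splunk:latest
    environment:
      # must match the admin password actually set inside the container
      - SPLUNK_PASSWORD=YourCurrentAdminPassword
      - SPLUNK_START_ARGS=--accept-license
    ports:
      - "8000:8000"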
Hello experts, for IWSVA, is there any specific sourcetype that we can select?
Why does the URA not update itself after a scan? I've had several apps installed for more than 2 weeks, and I still get the same message:

----------------------------------------
Details: This newly installed App has not completed the necessary scan.
Version: 1.1.6
Application Path: /opt/splunk/etc/apps/it_essentials_learn
Required Action: Please check again in 24 hours when the necessary scan is complete.
----------------------------------------

Even if I force a scan, nothing changes.
Ok. There are three ways of resolving this.
1. Preferred - define extractions for the needed fields. It's most probably not the only time you're going to be using them.
2. Add the subsearch further down the search pipeline. This is a bad idea because you'd first be extracting the field from all events and only filtering the events after that. A waste of resources.
3. Rework your subsearch so that you manually create a set of conditions to be inserted "as is" into the main search, and return that as a single value of a field called "search" (see the sketch after this list).
Both latter solutions are overly complicated and/or inefficient, so I'd advise you to properly extract the fields in the first place.
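A minimal sketch of option 3, assuming a hypothetical field user_id extracted in the subsearch; a field named "search" returned by a subsearch is inserted verbatim into the outer query:

index=web sourcetype=access
    [ search index=web sourcetype=errors
      | rex "user id=(?<user_id>\w+)"
      | stats values(user_id) as user_id
      | eval search = "(user_id=\"" . mvjoin(user_id, "\" OR user_id=\"") . "\")"
      | fields search ]

Here the index/sourcetype names and the rex are placeholders; the point is that the outer search receives the ready-made condition string, e.g. (user_id="a" OR user_id="b").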
I have a filter of Entity which has token t_entity, and its drilldown has All, C2V, C2C and Cases. I have different panels showing counts for these. I have a separate panel of C2V counts which I only want to show when C2V is selected from the filter.
Filter name - Entity
Token name - t_entity
How is it possible to show a panel only when it is selected from the filter?
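A minimal Simple XML sketch of one way to do this, assuming a dropdown input backing the t_entity token; the <change> block sets a helper token show_c2v only when C2V is selected, and the panel's depends attribute hides the panel otherwise (choices and panel content are placeholders):

<input type="dropdown" token="t_entity">
  <label>Entity</label>
  <choice value="*">All</choice>
  <choice value="C2V">C2V</choice>
  <choice value="C2C">C2C</choice>
  <choice value="Cases">Cases</choice>
  <change>
    <condition value="C2V">
      <set token="show_c2v">true</set>
    </condition>
    <condition>
      <unset token="show_c2v"></unset>
    </condition>
  </change>
</input>
...
<panel depends="$show_c2v$">
  <!-- C2V counts panel goes here -->
</panel>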
_get_all_passwords is deprecated, please use get_all_passwords_in_realm instead.
If I understand your requirements correctly, the easiest approach would be to use the transaction command with relatively low thresholds for transaction continuity. But the transaction command is relatively resource-intensive, so you might want to try a streamstats-based approach instead, as sketched below.
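A minimal streamstats-based sketch, assuming events belong to the same group when they arrive within 5 minutes of each other per host; the field names and the 300-second threshold are placeholders for whatever your continuity criteria are:

index=main sourcetype=my_logs
| sort 0 host _time
| streamstats current=f window=1 last(_time) as prev_time by host
| eval new_group = if(isnull(prev_time) OR _time - prev_time > 300, 1, 0)
| streamstats sum(new_group) as group_id by host
| stats min(_time) as start_time max(_time) as end_time count by host group_id
| eval duration = end_time - start_time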
If I understand correctly, you want to know when the equipment changed to its current status so long as the current status is not "null"? Try something like this: | eventstats last(Status) as lastS... See more...
If I understand correctly, you want to know when the equipment changed to its current status so long as the current status is not "null"? Try something like this: | eventstats last(Status) as lastStatus by EquipmentID | where lastStatus!="null" | streamstats last(Status) as previous current=f global=f by EquipmentID | where Status=lastStatus and Status != previous | stats last(_time) as lastTime last(Status) as lastStatus by EquipmentID | eval duration=now()-lastTime
Hello, I have the following data:  I want to use this data to setup a dashboard. In this dashboard I want to show the current duration of equipment where the Status is not "null" (null is a string... See more...
Hello, I have the following data:

Time    EquipmentID    Status    JobID
12:00   1              "null"    10
12:01   2              "null"    20
12:02   2              X         20
12:03   2              X         20
12:04   1              X         10
12:05   1              Y         10
12:06   1              Y         20
12:07   2              Y         20
12:08   1              X         10
12:09   2              Y         20
12:10   1              "null"    11
12:11   2              "null"    21
12:12   2              "null"    21
12:13   1              "null"    11
12:14   1              "null"    11
12:15   2              X         21
12:16   1              X         11
12:17   2              X         21
12:18   1              "null"    11
12:19   2              Z         21
12:20   2              Z         21

I want to use this data to set up a dashboard. In this dashboard I want to show the current duration of equipment where the Status is not "null" ("null" is a string in this case, not a null value).
- Each JobID only has one EquipmentID.
- The same status can occur and disappear multiple times per JobID.
- There are around 10 different statuses.
- I want the results to show only durations above 60 seconds.

If the current time is 12:21, I would like the result to look like this:

EquipmentID    Duration    Most_recent_status
2              120         Z

This is the query I use now; only the duration_now resets every time a new event occurs:

index=X sourcetype=Y JobID!="null"
| sort 0 _time
| stats last(_time) as first_time last(Status) as "First_Status" latest(Status) as Last_status latest(_time) as latest_times values(EquipmentID) as Equipment by JobID
| eval final_duration = case(Last_status="null", round(latest_times - first_time,2))
| eval duration_now = case(isnull(final_duration), round(now() - first_time,2))
| eval first_time=strftime(first_time, "%Y-%m-%d %H:%M:%S")
| eval latest_times=strftime(latest_times, "%Y-%m-%d %H:%M:%S")
| sort - first_time

Any help would be greatly appreciated.
Hi @faizalabu,
to implement HA on HFs, you have to install at least two HFs in your infrastructure and a Load Balancer that distributes traffic between the HFs and manages failover. Then you should manage the HFs using a Deployment Server, which guarantees that configurations are always aligned between them.
There isn't a cluster of HFs like there is for Indexers or Search Heads.
Ciao.
Giuseppe
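As a side note, if the senders are Splunk forwarders, they can also balance across the HFs themselves. A hedged outputs.conf sketch for the sending forwarders, assuming two hypothetical HF hostnames listening on 9997, uses the built-in auto load balancing and fails over automatically when one HF is down:

[tcpout]
defaultGroup = hf_group

[tcpout:hf_group]
# hostnames are placeholders; both HFs listen on the default receiving port
server = hf1.example.com:9997, hf2.example.com:9997
# switch between targets every 30 seconds (the default)
autoLBFrequency = 30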
Hi Team,
I want to implement HF in HA in a container setup. Can you help here?
Since you have been playing around with the search, which search with the 15 minute timeframe are you currently using?
Hello @danielcj,   Thank you for keeping me informed
Hi @ITWhisperer, I reduced the timeframe to 15 mins; now I have only a few thousand events, but still the query is not giving any output.
As I said, subsearches are limited to 50k events - you have 85k events, so the subsearch is not performing as you are expecting. You either need to limit the events your subsearch uses, e.g. change the timeframe, or rework your whole search so that it doesn't need a subsearch.
Hi @ITWhisperer, I am not sure if this helps, but I see 2 fields in the statistics result:
1. request_ids - this is empty
2. search result - this is where I see all the request_ids and results
We have two different sites/regions in Splunk Cloud, one in North America and the other in Europe. There is an ES migration planned such that all the alerts and data reporting to the Europe region will be migrated to the North America region, and there will be only one ES in the North America region.
This is a unique scenario and we have never done such a migration; can the community please help me with how to plan this type of migration? I need to prepare a comprehensive plan for this ES migration, highlight all possible changes/modifications/risks that need to be addressed, and also figure out the dependencies.
Please help here if you have any insights.
Hello, my DB Connect is displaying this error when I'm trying to access it:
Can not communicate with task server, check your settings
I had configured the app before and everything was working, but then I started to receive this error in the web app. The DB Connect app is not showing any configured DBs, just errors. Can you suggest anything?
BR
which is that obsolete function?
Hi @ITWhisperer , playing around further search sourcetype="my_source" "failed request, request id=" | rex “failed request, request id==(?<request_id>\”?[\w-]+\”?)” | stats values(request_id)... See more...
Hi @ITWhisperer , playing around further search sourcetype="my_source" "failed request, request id=" | rex “failed request, request id==(?<request_id>\”?[\w-]+\”?)” | stats values(request_id) as request_ids | eval request_ids = "\"" . mvjoin(request_ids, "\" OR \"") . "\"" | eval request_ids= replace(request_ids,"^request_id=","") | format  this gives me output like below ( ( request_ids="\"0fb1-4a2-a3-b8b\" OR \"0b99-d2-4e\" OR \"0c2-01a0-454-a3-2f3\"" ) ) but still there is `request_ids` so my main query does not work as expect
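For what it's worth, a minimal sketch of one way around this, assuming the outer search filters on a field also named request_id (the exact rex depends on your raw events): keep the individual values in a field whose name matches the outer field, and let format build the OR conditions itself:

sourcetype="my_source" "failed request, request id="
| rex "failed request, request id=(?<request_id>[\w-]+)"
| dedup request_id
| fields request_id
| format

which should return something like ( ( request_id="0fb1-4a2-a3-b8b" ) OR ( request_id="0b99-d2-4e" ) ), ready to be dropped in as a subsearch in the main query.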