All Posts

Hi Mario, I'm trying to pull a list of the app IDs via curl using an access token, but I'm getting HTTP Status 405, Method Not Allowed. Thoughts?

curl -X POST -H "Content-Type: application/json;charset=UTF-8" "https://controller:443/controller/restui/eumApplications/getAllEumApplicationsData?time-range=last_1_hour.BEFORE_NOW.-1.-1.60" --header "Authorization: Bearer ${access_token}"
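A 405 usually means the endpoint rejected the HTTP verb rather than the token. A minimal sketch to try, assuming this endpoint accepts GET (URL and time-range parameter taken from the command above):

curl -X GET \
  -H "Content-Type: application/json;charset=UTF-8" \
  -H "Authorization: Bearer ${access_token}" \
  "https://controller:443/controller/restui/eumApplications/getAllEumApplicationsData?time-range=last_1_hour.BEFORE_NOW.-1.-1.60"

If GET is also rejected, the verb is not the issue and the restui path itself may not be supported for token access.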
Hello, I am trying to use one cluster map to visualize the locations of a user's source and destination IPs for Duo logs. Currently, I have two separate cluster maps for each.

Source IP address query:
index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" access_device.ip!="NULL"
| iplocation access_device.ip
| geostats count by City

Destination IP address query:
index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" auth_device.ip!="NULL"
| iplocation auth_device.ip
| geostats count by City

I'm somewhat new to visualizations and dashboarding, and was hoping for some assistance with writing a combined query that would display both source and destination IPs on a single cluster map.
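One common pattern is to tag each result set with a direction and let geostats split the map clusters by that field. A rough sketch built from the two queries above (field names as in the post; untested):

index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" access_device.ip!="NULL"
| iplocation access_device.ip
| eval direction="source"
| append [search index="duo" extracted_eventtype=authentication_v2 user.name="$user.name$" auth_device.ip!="NULL"
    | iplocation auth_device.ip
    | eval direction="destination"]
| geostats count by direction

With "by direction", each cluster on the map renders as a pie split between source and destination counts.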
I'm completely new to Splunk. I have downloaded Splunk Enterprise and have a profile, but I cannot seem to get it configured to work off of my home system, i.e., testing Splunk on myself. The steps stop working at the step "Configure the universal forwarder using configuration files". I'm not understanding how to access the config settings through the CLI to move beyond this step.
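For reference, the forwarder's CLI can write those config files for you, and the files themselves live under the install directory. A minimal sketch assuming a default Linux install path and the default receiving port (host, port, and monitored path below are hypothetical; adjust for your setup):

cd /opt/splunkforwarder/bin
./splunk add forward-server 192.168.1.10:9997    # hypothetical indexer address
./splunk add monitor /var/log/syslog             # hypothetical file to monitor
./splunk restart

The resulting settings are written to outputs.conf and inputs.conf, typically under /opt/splunkforwarder/etc/system/local/ or an app's local directory, which you can then edit directly.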
Hello. I'm trying to send logs from a heavy forwarder to two indexers. One is receiving logs, but the second is not.

Here is the props.conf file:
[test]
TRANSFORMS-routing = errorRouting,successRouting

Here is the outputs.conf file:
[tcpout:errorGroup]
server = 35.196.124.233:9997
[tcpout:successGroup]
server = 34.138.8.216:9997

Here is the transforms.conf file:
[errorRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = errorGroup
[successRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = successGroup

What could be the problem?
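One thing to check: both transforms match every event (REGEX = .) and both write to the same _TCP_ROUTING key, so the last transform listed wins and only one group ever receives data. A sketch of a single transform that addresses both groups at once (stanza name is illustrative; untested):

transforms.conf:
[bothRouting]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = errorGroup,successGroup

props.conf:
[test]
TRANSFORMS-routing = bothRouting

_TCP_ROUTING accepts a comma-separated list of output groups, so one transform can fan the same event out to both indexers.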
It is not the number of results that matters; it is the number of events returned by the first part of the search that you need to check:
index=summary source="abc1" OR source="abc2"
The results have fewer than 10,000 events in both subsearches. I am off my system now, but I will try multisearch tomorrow. Let's see if it works.
Subsearches are limited to 50,000 events - could this be the reason your subsearch is not showing any results? Have you tried a shorter timeframe, or tried fragmenting your subsearch in some way, e.g. splitting by source?
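For example, the appendcols subsearch from earlier in the thread could be fragmented into one subsearch per source. A rough sketch, assuming the field names from the thread:

| appendcols [search index=summary source="abc1" | timechart span=1h sum(xyz) as Counter1]
| appendcols [search index=summary source="abc2" | timechart span=1h sum(xyz) as Counter2]
| eval Counter = coalesce(Counter1,0) + coalesce(Counter2,0)

Each subsearch then only has to return half the events before hitting any limit.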
Wow... This worked! Thank you very much. This has been a journey. Looks like I just need to shorten my relative time range to avoid the max-results and timeout limits of the "map" function, but that is totally worth it.
Hi, I have several dashboards that use the custom JS to create tabs from this 2015 Splunk blog: Making a dashboard with tabs (and searches that run when clicked) | Splunk. The custom JS from the blog and the tabs worked perfectly on Splunk version 8.1.3; however, after upgrading to version 9.1.0.1, the custom JavaScript that powers the tabs no longer works. When loaded, the dashboards show an error that says: "A custom JavaScript error caused an issue loading your dashboard." Does anyone know how to update the JavaScript from this blog post to be compatible with Splunk version 9.1.0.1 and jQuery 3.5? I have not been able to find any other Splunk questions referencing this same issue. I can provide the full JavaScript from the blog post in this message if necessary; it is also on the blog post.
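Not the blog's actual code, but one known breaking change in jQuery 3.5 is that self-closing tags inside HTML strings are no longer auto-expanded, which commonly throws errors in older dashboard JS. A hedged illustration of the kind of edit that may be needed:

// Before: jQuery < 3.5 silently expanded self-closing tags in HTML strings
var panes = $('<div class="tab-pane"/><div class="tab-pane"/>');
// After: explicit closing tags parse identically on every jQuery version
var panes = $('<div class="tab-pane"></div><div class="tab-pane"></div>');

Checking the browser console for the exact exception would confirm whether this is the change that applies here.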
Yes, I have results from both subsearches. But I still don't see Counter in the results, which is weird.
Hello all, we are getting an error in the analytics agent section of the SAP ABAP status logs, as shown below.

- Analytics agent connection:
> ERROR: connection check failed:
> Ping: 1 ms
> HTTP response ended with error: HTTP communication failed (code 411: Connect to eutehtas001:9090 failed: NIECONN_REFUSED(-10))
> HTTP server ******* (URI '/_ping') responded with status code 404 (Connection Refused)
> Analytics Agent was not reached by server *********_EHT_11V1http://**********:9090. Is it running?

Can anyone tell me why this error is occurring?

Regards,
Abhiram
There doesn't appear to be anything wrong with the search as you have presented it - are you certain you have results from the subsearch?
index=summary source="abc1" OR source="abc2"
| timechart span=1h sum(xyz) as Counter
| table Counter
I want to allow users of a specific role to access the user dropdown menu so that they can log out, but I want to prevent them from modifying the user preferences (time zone, default app, etc.). I have determined how to prevent these users from modifying their own password with role capabilities. Without the list_all_objects role capability, the user dropdown doesn't display, making it difficult to log out. With the list_all_objects capability active for the role, the user now has access to adjust the user preferences. Thanks in advance!
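For reference, roles and their capabilities live in authorize.conf, and a capability is only present if granted directly or inherited via importRoles. A minimal sketch with a hypothetical role name (the capability set is illustrative, not a verified way to hide the preferences page):

authorize.conf:
[role_logout_only]
# hypothetical role: nothing inherited, so only the capabilities listed here apply
importRoles =
srchIndexesAllowed = main
search = enabled
list_all_objects = enabled

The trade-off described above still stands: list_all_objects is what makes the dropdown render, so restricting preferences likely needs a different mechanism than capabilities alone.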
I am trying to get data from 2 indexes and combine them via appendcols. The search is:

index="anon" sourcetype="test1" localDn=*aaa*
| fillnull value=0 release_resp_succ update_resp_succ release_req update_req n40_msg_written_to_disk create_req
| eval Number_of_expected_CDRs = release_req+update_req
| eval Succ_CDRs=release_resp_succ+update_resp_succ
| eval Missing_CDRs=Number_of_expected_CDRs-Succ_CDRs-n40_msg_written_to_disk
| timechart span=1h sum(Number_of_expected_CDRs) as Expected_CDRs sum(Succ_CDRs) as Successful_CDRs sum(Missing_CDRs) as Missing_CDRs sum(n40_msg_written_to_disk) as Written sum(create_req) as Create_Request
| eval Missed_CDRs_%=round((Missing_CDRs/Expected_CDRs)*100,2)
| table *
| appendcols [| search index=summary source="abc1" OR source="abc2" | timechart span=1h sum(xyz) as Counter | table Counter]

But I am getting output from just the first search. The appendcols subsearch is just not giving the Counter field in the output.
Try something like this:
index=anon_index source="*source.log" "message was published to * successfully"
| dedup MSGID
| append [search index="index2" "ROUTING failed. Result sent to * response queue tjhis.MSG.RES.xxx" source=MSGTBL* OR source=MSG_RESPONSE OR source=*source.log
    | dedup MSGID]
| stats count as firstcount
| appendcols [search index=anon_index source="*source.log" "call to ODM at */file_routing"
    | dedup MSGID
    | stats count as secondcount]
| eval diff=firstcount-secondcount
Yeah, thank you in advance.
Try something like this:
| rex field=MessageText max_match=0 "(?<SequenceNumber>\d+)"
| table SequenceNumber
| eval fullSequence = mvrange(tonumber(mvindex(SequenceNumber,0)), tonumber(mvindex(SequenceNumber,-1))+1)
| eval missing = mvmap(fullSequence, if(isnull(mvfind(SequenceNumber, fullSequence)), fullSequence, null()))
Hi all, we have a source which comes in via HEC into an index. The sourcetyping is currently dynamic. We then route the data to a specific index based on an indexed label. Here comes the catch: if we have another indexed field, label_sensitive, we want to clone that event into a new index and sourcetype.

props.conf:

[(?::){0}kube:container:*]
TRANSFORMS-route_by_domain_label = route_by_domain_label

transforms.conf (we route the data based on a custom label, named k8s_label for this example; for sensitive data we also have a label called label_sensitive):

[route_index_by_label_domain]
SOURCE_KEY = field:k8s_label
REGEX = index_domain_(\w+)
FORMAT = indexname_$1
DEST_KEY = _MetaData:Index

[clone_when_sensitive]
SOURCE_KEY = field:label_sensitive
REGEX = true
DEST_KEY = _MetaData:Sourcetype
#CLONE_SOURCETYPE = sensitive_events
FORMAT = sourcetype::sensitive_events
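For what it's worth, CLONE_SOURCETYPE is its own transform action rather than a value written through DEST_KEY, and the clone is then routed by a second transform bound to the new sourcetype. A rough sketch of that two-step pattern, using the field name from the post (the target index name and second stanza name are illustrative):

transforms.conf:
[clone_when_sensitive]
SOURCE_KEY = field:label_sensitive
REGEX = true
CLONE_SOURCETYPE = sensitive_events

[route_sensitive_clone]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = sensitive_index

props.conf:
[sensitive_events]
TRANSFORMS-route_clone = route_sensitive_clone

The original event keeps flowing through the existing route_by_domain_label path, while the clone arrives as sourcetype sensitive_events and is sent to its own index.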
Just released in Splunk Enterprise Security 7.2.0, this is now a feature. Splunk Idea ESSID-I-67: Ability to configure multiple drill-down searches for notable
Hi @vikas_gopal, the main problem is that your backup is probably in clustered format: I'm not sure it's possible to restore it without a cluster! Let me know if I can help you more. Ciao. Giuseppe P.S.: Karma Points are appreciated