All Topics

I have a requirement to forward the search results of a query to an indexer of an external organization. The volume of this data would be fairly high. I understand there are multiple ways to achieve this. I am thinking of using a script that runs every 5 minutes to grab the search results via the REST API, store them locally on disk, and forward them from there via outputs.conf. I understand this would be doable via a script, but the challenge is that I am not that experienced with scripting, hence a little unsure. So I'm wondering if anyone can please share whether there is an easier way than doing this via a script.
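A minimal sketch of the scripted approach described above, using only the standard library. The base URL, token, and output path are assumptions for illustration; the `/services/search/jobs/export` endpoint is Splunk's streaming search-export REST endpoint, but credentials and TLS handling would need to match your deployment:

```python
import urllib.parse
import urllib.request

def build_export_url(base_url, query, output_mode="json"):
    """Build the URL for Splunk's search export REST endpoint.

    base_url is an assumption (typically https://<host>:8089).
    """
    params = urllib.parse.urlencode(
        {"search": f"search {query}", "output_mode": output_mode}
    )
    return f"{base_url}/services/search/jobs/export?{params}"

def fetch_and_store(base_url, query, token, out_path):
    """Hypothetical fetch step: pull results and write them to disk,
    so a monitored directory plus outputs.conf can forward them."""
    req = urllib.request.Request(
        build_export_url(base_url, query),
        headers={"Authorization": f"Bearer {token}"},
    )
    with urllib.request.urlopen(req) as resp, open(out_path, "wb") as fh:
        fh.write(resp.read())
```

Run on a 5-minute cron, this matches the workflow in the question; the main design question is deduplication between runs (e.g. constraining the search's time window to the last 5 minutes).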
Dear Community, I am writing a search for Windows services. I am trying to find out the number of hosts that have/do not have a certain service. Here is the search I have to find servers that have the service running:

index=*_oswin sourcetype="WMI:Service" source="WMI:Service" Name="Appdynamics Machine Agent" | dedup host | stats sum()

How can I do the second part, please? Also, I want to integrate those two numbers into one pie chart. Any suggestion is highly appreciated!
Hi - Was looking for some assistance in extracting the FQDNs from the paths below:

/var/log/remote/ldap.inftech.net/2021-08-03/auth.log
/var/log/remote/web-proxy-01.int.inftech.net/2021-08-03/proxy.log
/var/log/remote/ns01.inftech.net/2021-08-03/named.log

Regex isn't my strongest area, and one of the domains has an additional level, which makes it that much harder for me.
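One way to sidestep the varying number of domain levels is to capture everything between `remote/` and the next slash, rather than modeling the dots at all. A sketch in Python (the same pattern should carry over to an SPL `rex`, e.g. `rex field=source "/remote/(?<fqdn>[^/]+)/"`, where `fqdn` is an assumed field name):

```python
import re

# The FQDN is whatever path component sits directly under /remote/,
# no matter how many dot-separated levels the domain has.
FQDN_RE = re.compile(r"/remote/([^/]+)/")

def extract_fqdn(path):
    """Return the FQDN component of a remote-syslog path, or None."""
    m = FQDN_RE.search(path)
    return m.group(1) if m else None
```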
I keep getting the following error when I try to launch Splunk in a web browser; how do I resolve this, please? Note: I have a functioning internet connection. Thank you.

This site can't be reached. 127.0.0.1 refused to connect. Try: checking the connection. ERR_CONNECTION_REFUSED
Hi, I'm pretty new to Splunk and I'm creating a dashboard for one of my environments. One thing I can't figure out is how to populate a table with entries from multiple fields, sorted by host. It should look like this:

HOST    VOLUME NAMES
A       ARC
B       ARC, LIV, FOR
C       LIV, FOR, FUN

The host and all of the volume names come from different fields. Any help would be greatly appreciated.
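The table being described, one row per host with all volume names collapsed into one cell, is the shape SPL's `stats values(<volume_field>) by host` produces (the volume field name is an assumption). As a sanity check, the same reshaping sketched in Python:

```python
from collections import defaultdict

def volumes_by_host(events):
    """Group (host, volume) pairs into one comma-joined cell per host,
    mirroring SPL's `stats values(volume) by host`."""
    grouped = defaultdict(list)
    for host, volume in events:
        if volume not in grouped[host]:   # de-duplicate like values()
            grouped[host].append(volume)
    return {h: ", ".join(vols) for h, vols in grouped.items()}
```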
I am trying to create a new process to have a service (non-admin) account add new Search Heads into a cluster. Specifically, I need enough capabilities for the service account to initialize and add a search head to a cluster. I want to avoid giving "admin-all-objects" as it is too much privilege, and I want to adhere to a least-privilege policy.

I created a new local account and added it to the deployer, the SH cluster, and the new SH. I then added capabilities related to SH clustering so that I can have this service account initialize and add the SH to a cluster. However, I am getting errors related to permissions.

Capabilities added: edit_restmap edit_search_head_clustering edit_search_server edit_server list_search_head_clustering rest_apps_management rest_apps_view rest_properties_get rest_properties_set restart_splunkd

Error when trying to initialize the SH:

/opt/splunk/bin/splunk init shcluster-config <CLUSTER INFO>
User 'shcluster_config' with roles { shcluster_config, user-shcluster_config } cannot write: /nobody/system/server { read : [ * ], write : [ admin ] }, removable: no

Does anybody know what capability I need to give this service account enough access to add new SHs to a cluster? Thank you
Hi all, I have a specific webhook URL which has been used in multiple Splunk alerts. Now I want to change that webhook. Is there any way I can figure out which alerts are using this particular webhook?
Hi, what is the best way to map this Prometheus query into Splunk's query language?

Prometheus query: 100*sum_over_time(metric_name_gauge{}[1d:1m])/1440

metric_name_gauge has possible values 0 or 1. This query adds the values of metric_name_gauge over a period of 1 day at a resolution of 1 minute, then divides the result by the number of minutes in a day, which is 1440. Any idea how to implement this query using Splunk's query language? Thanks in advance.
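Whatever the SPL ends up looking like, the arithmetic the Prometheus query performs is simply "percentage of minutes in the day where the gauge was 1", which is easy to verify independently. A sketch assuming one 0/1 sample per minute:

```python
def availability_percent(samples, minutes_per_day=1440):
    """100 * sum(per-minute 0/1 samples) / 1440, i.e. the percentage
    of minutes in the day during which the gauge was 1."""
    return 100 * sum(samples) / minutes_per_day
```

In SPL the rough equivalent would be summing the metric over a 1-day window at 1-minute resolution (e.g. with mstats if the data lives in a metrics index, which is an assumption about the setup) and then dividing by 1440 with eval.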
I'm looking to combine data from a lookup file with data from our security server to create a comparison chart between how many alarms we get (security server) and how many of those are acknowledged (lookup file). I figured multisearch was the way to go, but I'm getting errors when using it. The search is below. The reason for the eval Date fields is that one column contains dates, so I needed to get them in the right order since they were always out of order. The end goal is a daily chart showing x alarms and y acknowledgements.

| multisearch [| inputlookup genericlookupname.csv | eval Date=strptime(Date,"%m/%d/%Y") | sort Date | eval Date=strftime(Date,"%m/%d/%Y")] [search index=index EVDESCR="alarm"]
Hi, I have made a Splunk dashboard using the Network Diagram viz (snip below). The requirement was for clicking on nodes to redirect to another URL or dashboard, which I have achieved using: Drilldown -> Link to custom URL -> https://xyz.splunkcloud.com/en-US/app/$row.value|n$. The issue is that if I click on white space in the network diagram, it also takes me to the other URL, which should not happen. I only need the nodes in the diagram to be clickable, not the white space. Any insight into this would be very helpful. SPL:

| makeresults count=12 | streamstats count as id | eval from=case(id=1,"Machine1", id=2,"Machine2", id=3,"Load Balancer", id=4,"Machine3", id=5,"Machine4", id=6,"Web Server", id=7,"User", id=8,"Database") | eval to=case(id=1,"Load Balancer",id=2,"Load Balancer",id=3,"Web Server",id=4,"Load Balancer",id=5,"Load Balancer",id=6,"Database",id=7,"Database") | eval value=case(id=1,"HEALTHY",id=2,"HEALTHY",id=3,"WARNING",id=4,"WARNING",id=5,"UNHEALTHY",id=6,"WARNING",id=7,"HEALTHY",1=1,"No Data") | eval color=case(value=="HEALTHY","green",value=="WARNING","yellow",value=="HIGH","orange",value=="UNHEALTHY","red",value=="No Data","grey") | eval value=case(id=1,"machine1_dashboard",id=2,"machine2_dashboard",id=3,"load_balancer_dashboard") | fields from, to, value, color

Thanks.
index=error sourcetype=error_log "Retry counter reached" | makemv delim="=",values | dedup errId | table errId | map search="search index=error sourcetype=error_log $errId$ "Caused by" | head 1 | rex field=_raw "MessageText=(?<FailureReason>.+) Please report to system admin" | eval FailureReason=\"$FailureReason$\" | eval errId=\"$errId$\"" | table errId, FailureReason

The above query does not show any results. If I run the searches separately, I do see output. What is wrong with the query, please?
Hi everyone, I have a very basic search outputting two types of entries into a field called "event". I need to get a count of each type per hour. I've been able to get the view I want using a pivot, but I don't really want to burden the system maintaining the data model if I don't need to. So here's my question: how can I create a table (presumably using stats) with two rows (one for each type) and columns for each hour's total (descending)?

Desired format: (screenshot of the desired format using pivot)
Current output when I try to use stats: (screenshot of the current stats output)
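The shape being asked for, one row per event type and one column per hourly bucket, is a pivot of hourly counts. The reshaping itself, sketched in Python, assuming each record carries an event type and an hour label:

```python
from collections import Counter

def pivot_counts(records):
    """records: iterable of (event_type, hour) pairs.
    Returns {event_type: {hour: count}}: one row per type,
    one column per hour, mirroring a pivot of hourly counts."""
    counts = Counter(records)
    table = {}
    for (etype, hour), n in counts.items():
        table.setdefault(etype, {})[hour] = n
    return table
```

In SPL the equivalent is typically a `timechart span=1h count by event` (columns per type) followed by a transpose to flip types into rows; the exact field name `event` comes from the question.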
I'm currently running a script on a server querying a server's availability; the result of the script is "200", or other codes if the environment is not available. How can I create a visualization that shows the percentage of the environment's availability over 30 days?
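The number behind that visualization is just the share of checks that returned 200 over the window. A minimal sketch of the calculation, assuming one status code per check:

```python
def percent_available(status_codes):
    """Percentage of checks whose result code was 200."""
    if not status_codes:
        return 0.0
    ok = sum(1 for code in status_codes if code == 200)
    return 100 * ok / len(status_codes)
```

In Splunk this would typically be an eval flag (1 if the code is 200, else 0) averaged over 30 days and multiplied by 100, rendered as a single-value or gauge panel.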
Hi developer, thank you for the great app. Do you have plans to upgrade the Netcool add-on (currently on v3) to be compatible with Splunk 8.2 / Python 3? Chamrong
Hi, we set up an F5 VIP to load-balance syslog input to several heavy forwarders on UDP 514. We're successfully receiving syslog events through the F5 VIP from several sources, but for some reason the syslogs from our VMware environment are not being accepted. Network tracing on the F5 VIP shows VMware sources making connections to the front-end VIP and the back-end HFs, but the syslogs are not being accepted and processed by the HFs. We've taken one VMware server and directed syslogs straight to one of the HFs (bypassing the F5), and this works. Any suggestions on what might be happening when sending the VMware syslogs through the F5 that would cause them to not be accepted/received by the HFs? The inputs.conf file has also been configured with all the VMware sources to accept syslog input from. Thank you.
I have events coming from an API that all have the same 10 fields. Viewing the raw event, one of the fields (detail) is quote-escaped JSON (\"). The contents of the field vary and I cannot get consistent parsing via configuration files. The props.conf already includes KV_MODE = json. If I add | spath input=detail to the SPL it parses perfectly, but I need to do the parsing from the config files so I can build data models. Since the KVs vary across events, parsing the whole detail field versus regexes on specific KVs seems more efficient. I've had limited success using a regex in transforms.conf, and I think trying to use | eval details = spath(X,Y) won't work because there are multiple keys and values. Some sample events are below.

{"edgeName": "DVC_NAME", "enterpriseUsername": null, "event": "EDGE_NEW_DEVICE", "category": "EDGE", "id": 12345678, "segmentName": null, "severity": "NOTICE", "eventTime": "2021-08-03T13:21:31.000Z", "message": "New or updated client device 01:23:45:67:ab:ef, ip 192.168.0.100, segId 0, hostname NT_HOSTNAME, os", "detail": "{\"last_request_time\":0,\"client_mac\":\"01:23:45:67:ab:ef\",\"client_ipv4addr\":\"192.168.0.100\",\"hostname\":\"NT_HOSTNAME\",\"os_type\":0,\"os_class\":0,\"os_class_name\":\"UNKNOWN\",\"os_version\":\"\",\"device_type\":\"\",\"os_description\":\"\",\"dhcp_param_list\":\"1,3,6,15,31,33,43,44,46,47,119,121,249,252\",\"segment_id\":0}"}
{"id": 73646231, "severity": "INFO", "eventTime": "2021-08-03T06:36:31.000Z", "segmentName": null, "message": "Edge [DVC_NAME] has re-established communication with the Orchestrator", "category": "EDGE", "event": "EDGE_UP", "enterpriseUsername": null, "detail": 
"{\"enterpriseAlertConfigurationId\":null,\"enterpriseId\":316,\"edgeId\":8748,\"edgeName\":\"DVC_NAME\",\"state\":\"PENDING\",\"stateSetTime\":\"2021-08-03T06:36:30.867Z\",\"triggerTime\":\"2021-08-03T06:36:30.867Z\",\"remainingNotifications\":1,\"nextNotificationTime\":\"2021-08-03T06:36:30.867Z\",\"lastContact\":\"2021-08-03T06:36:29.000Z\",\"name\":\"EDGE_UP\",\"type\":\"EDGE_UP\",\"firstNotificationSeconds\":0,\"maxNotifications\":1,\"notificationIntervalSeconds\":120,\"resetIntervalSeconds\":3600,\"timezone\":\"America/Phoenix\",\"locale\":null}", "edgeName": "DVC_NAME"} {"edgeName": "DVC_NAME", "id": 73579676, "eventTime": "2021-08-02T23:24:58.000Z", "event": "MGD_CONF_APPLIED", "severity": "INFO", "segmentName": null, "enterpriseUsername": null, "detail": "{\"heartBeatSeconds\": 30, \"managementPlaneProxy\": {\"drHeartbeatSecs\": 60, \"primary\": \"host-1.domain.net\", \"secondary\": \"host-2.domain.net\"}, \"timeSliceSeconds\": 300, \"statsUploadSeconds\": 300}", "message": "Applied new configuration for managementPlane version 1627946184323", "category": "EDGE"}
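The reason `| spath input=detail` works at search time is that the detail field is itself a complete JSON document serialized into a string, so it needs a second decoding pass that KV_MODE = json alone does not perform. The same two-step decode, shown in Python for clarity (the sample event here is abbreviated from the ones above):

```python
import json

# Abbreviated raw event: the detail field's value is escaped JSON.
raw_event = (
    '{"event": "EDGE_NEW_DEVICE", "severity": "NOTICE", '
    '"detail": "{\\"client_mac\\":\\"01:23:45:67:ab:ef\\",\\"segment_id\\":0}"}'
)

outer = json.loads(raw_event)         # first pass: the event itself
detail = json.loads(outer["detail"])  # second pass: the inner document
```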
I have the following scenario where duplicate accounts have been created for a transaction ID value. I would like to count how many duplicates have been created and list them in a table. I compare the message with a string which indicates the successful creation of the account. The current query is as follows:

index=myindex sourcetype=mysourcetype | spath message | search message="Account Created Successfully" | stats count by transactionId

The logs have the following format:

{ level: info message: Account Created Successfully timestamp: 2021-08-02T05:58:44-04:00 transactionId: 100200300 }

The above search query is not giving me the correct counts. I manually checked the logs for the transaction ID, but the stats count is wrong. How can I modify the query to get accurate counts?
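For reference, the counting logic being aimed at, one count of success messages per transactionId, keeping only IDs with more than one, sketched in Python (the field names come from the log format above):

```python
from collections import Counter

def duplicate_counts(events):
    """Count 'Account Created Successfully' messages per transactionId
    and keep only IDs that occur more than once (i.e. duplicates)."""
    counts = Counter(
        e["transactionId"]
        for e in events
        if e.get("message") == "Account Created Successfully"
    )
    return {tid: n for tid, n in counts.items() if n > 1}
```

The SPL analogue of the final filter would be appending `| where count > 1` to the stats; one thing worth double-checking in the original search is whether the message filter is matching exactly the intended string and nothing broader.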
Hi Splunk community, I am having trouble creating an embed from a saved report. The website throws a 404 error when I click "Enable Embedding". It's throwing this error inside the 404:

{"messages":[{"type":"ERROR","text":"Cannot find saved search with name 'NameRedacted'."}]}

I have also attached a GIF to show exactly where this error is happening.
Hi, how would I write TIME_FORMAT and TIME_PREFIX in my props.conf file for the following sample events? Any help will be highly appreciated. Thank you so much.

RTJCB|DEMOEE|AFFR|ANALYST   |VIEWSUMMARY    |XYA565656873                ||12.214.61.90|00|                                                            |20210730 13:00:26:907|   |000000|030|ACMF|0|
STJCB|DEMOEE|AFFR|ANALYST   |VIEWCASE       |YNA565656873                ||12.214.61.90|00|                                                            |20210730 13:00:29:045|      |000000|030|ACMF|0|
TRJCB|DEMO|AFFR|ANALYST   |VIEWSUMMARY    |XBC565656873                ||12.214.61.90|00|                                                            |20210730 13:00:30:421|       |000000|030|ACMF|0|
RXJCB|DEMOEE|AFFR|ANALYST   |VIEWCASE       |DCN132748456                ||12.214.61.90|00|                                                            |20210730 13:00:40:273|     |201512|030|ACMF|0|
DSJCB|DEMOEE|AFFR|ANALYST   |UPDATECASE     |CBB132748456                ||12.214.61.90|01|Attempt to update to an code                 |20210730 13:00:47:347|        |201512|030|ACMF|0|
RXJCB|DEMOEE|AFFR|ANALYST   |VIEWCASE       |ABB132748456                ||12.214.61.90|00|                                                            |20210730 13:00:48:519|          |201512|030|ACMF|0|
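For a timestamp shaped like `20210730 13:00:26:907` the layout is year-month-day, a space, H:M:S, a colon, then milliseconds. One plausible props.conf pairing is sketched in the comments below (the TIME_PREFIX regex is an assumption based on the sample's pipe-delimited layout, and Splunk's %3N consumes the 3-digit milliseconds); the format string itself is verified in Python using %f, which accepts the millisecond field:

```python
from datetime import datetime

# Candidate props.conf settings (assumptions, to be validated with
# "Add Data" preview or btool):
#
#   TIME_PREFIX = \|(?=\d{8} \d{2}:)
#   TIME_FORMAT = %Y%m%d %H:%M:%S:%3N
#   MAX_TIMESTAMP_LOOKAHEAD = 25
#
# Python's strptime has no %3N, so the equivalent layout is checked
# here with %f (which accepts 1-6 fractional digits, right-padded).
ts = datetime.strptime("20210730 13:00:26:907", "%Y%m%d %H:%M:%S:%f")
```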
Hello all, we have an indexer cluster which utilizes SmartStore in AWS S3. Things appear to be working, but we have observed the following in our logs on the index peers:

08-02-2021 12:34:53.725 -0400 ERROR IndexerIf [22952 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~2~32FED627-479E-41CF-A401-2F061C2EF7E5 with remote metadata due to err=
08-02-2021 12:45:04.668 -0400 ERROR IndexerIf [24603 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~3~32FED627-479E-41CF-A401-2F061C2EF7E5 with remote metadata due to err=
08-02-2021 12:55:21.684 -0400 ERROR IndexerIf [25930 FilesystemOpExecutorWorker-0] - failed to update bucket bid=sysandy_test2~1~80593238-39FB-443D-8E13-8FC3E521B22C with remote metadata due to err=

It appears that these errors sometimes occur on only one or two of the three index peers, but they result in a tsidx file that is different locally than the S3 copy. It is unclear why this is happening, and the err= value appears to be blank. Has anyone seen this behavior, and are there any suggestions for resolving it? Thanks.