All Topics



I have set up a local Splunk instance (8.2) and installed the Splunk Add-on for Microsoft Cloud Services (4.1.3). I have already created an event hub and granted my app the Azure Event Hubs Data Owner permission so that it can access the event hub in Azure. I am able to add the Azure app account on the Configuration tab, but when I try to add an input, the Splunk GUI keeps loading and I cannot add any inputs. When I checked the logs I found 404 errors. Below is the stack trace of the error:

06-28-2021 14:58:08.717 -0400 ERROR AdminManagerExternal [67673713 TcpChannelThread] - Stack trace from python handler:
Traceback (most recent call last):
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/rest_handler/handler.py", line 117, in wrapper
    for name, data, acl in meth(self, *args, **kwargs):
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/rest_handler/handler.py", line 172, in all
    **query
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunklib/binding.py", line 290, in wrapper
    return request_fun(self, *args, **kwargs)
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunklib/binding.py", line 71, in new_f
    val = f(*args, **kwargs)
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunklib/binding.py", line 686, in get
    response = self.http.get(path, all_headers, **query)
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunklib/binding.py", line 1194, in get
    return self.request(url, { 'method': "GET", 'headers': headers })
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunklib/binding.py", line 1255, in request
    raise HTTPError(response)
splunklib.binding.HTTPError: HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/Applications/Splunk/lib/python3.7/site-packages/splunk/admin.py", line 114, in init_persistent
    hand.execute(info)
  File "/Applications/Splunk/lib/python3.7/site-packages/splunk/admin.py", line 637, in execute
    if self.requestedAction == ACTION_LIST: self.handleList(confInfo)
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/bin/splunk_ta_mscs_rh_mscs_storage_table.py", line 111, in handleList
    AdminExternalHandler.handleList(self, confInfo)
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/rest_handler/admin_external.py", line 51, in wrapper
    for entity in result:
  File "/Applications/Splunk/etc/apps/Splunk_TA_microsoft-cloudservices/lib/splunktaucclib/rest_handler/handler.py", line 122, in wrapper
    raise RestError(exc.status, str(exc))
splunktaucclib.rest_handler.error.RestError: REST Error [404]: Not Found -- HTTP 404 Not Found -- b'{"messages":[{"type":"ERROR","text":"Not Found"}]}'

Can someone please provide any suggestions on this? @jkat54, @chli_splunk, @bradp1234, @tmontney
Hi all, I am currently trying to deploy the cluster-operator and cluster-agent on Kubernetes. The cluster metrics show up correctly under the "Servers/Cluster" menu. However, I would like to configure auto-instrumentation for a Java application, with no luck so far; following the validation guide here has not been much help: https://docs.appdynamics.com/21.5/en/infrastructure-visibility/monitor-kubernetes-with-the-cluster-agent/auto-instrument-applications-with-the-cluster-agent/validate-auto-instrumentation#ValidateAuto-Instrumentation-TroubleshootAuto-InstrumentationWhenNotApplied

Here is my Clusteragent spec:

apiVersion: appdynamics.com/v1alpha1
kind: Clusteragent
metadata:
  name: k8s-cluster-agent
  namespace: appdynamics
spec:
  appName: "TEST-CLUSTER"
  controllerUrl: "https://xxx.appdynamics.com"
  account: "xxxdev"
  logLevel: DEBUG
  logFileSizeMb: 7
  logFileBackups: 6
  # docker image info
  image: "my.own.repository.org:5000/appdynamics/cluster-agent:latest"
  serviceAccountName: appdynamics-cluster-agent
  nsToMonitor:
    - "appdynamics"
    - "cma-dev"
    - "cma-qa"
  #
  # auto-instrumentation config
  #
  instrumentationMethod: Env
  nsToInstrumentRegex: "xxx-dev|xxx-qa"
  appNameStrategy: label
  defaultAppName: not-specific
  defaultCustomConfig: "-Dappdynamics.agent.maxMetrics=15000"
  defaultEnv: JAVA_TOOL_OPTIONS
  resourcesToInstrument:
    - Deployment
  imageInfo:
    java:
      image: "my.own.repository.org:5000/appdynamics/java-agent:latest"
      agentMountPath: /opt/appdynamics
  instrumentationRules:
    - namespaceRegex: "xxx-dev"
      language: java
      env: JAVA_TOOL_OPTIONS
      appNameLabel: app
      instrumentContainer: first
      imageInfo:
        image: "my.own.repository.org:5000/appdynamics/java-agent:latest"
        agentMountPath: /opt/appdynamics
    - namespaceRegex: "xxx-qa"
      language: java
      env: JAVA_TOOL_OPTIONS
      appNameLabel: app
      instrumentContainer: first
      imageInfo:
        image: "my.own.repository.org:5000/appdynamics/java-agent:latest"
        agentMountPath: /opt/appdynamics

But I got this:
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:628 - rule xxx-qa matches Deployment spring-api-productprofile in namespace xxx-qa with labels map[app:spring-api-productprofile]
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:639 - Found a matching rule {xxx-qa map[] java first -Dappdynamics.agent.maxMetrics=15000 JAVA_TOOL_OPTIONS map[agent-mount-path:/opt/appdynamics image:my-own-repository.org:5000/appdynamics/java-agent:latest] map[bci-enabled:true port:3892] 0 0 app 0 false []} for Deployment spring-api-productprofile in namespace xxx-qa with labels map[app:spring-api-productprofile]
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:249 - Instrumentation state for Deployment spring-api-productprofile in namespace xxx-qa with labels map[app:spring-api-productprofile] is false
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:628 - rule xxx-dev matches Deployment spring-api-deliveries in namespace xxx-dev with labels map[app:spring-api-deliveries]
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:639 - Found a matching rule {xxx-dev map[] java first -Dappdynamics.agent.maxMetrics=15000 JAVA_TOOL_OPTIONS map[agent-mount-path:/opt/appdynamics image:my-own-repository.org:5000/appdynamics/java-agent:latest] map[bci-enabled:true port:3892] 0 0 app 0 false []} for Deployment spring-api-deliveries in namespace xxx-dev with labels map[app:spring-api-deliveries]
[DEBUG]: 2021-06-28 16:12:36 - instrumentationconfig.go:249 - Instrumentation state for Deployment spring-api-deliveries in namespace xxx-dev with labels map[app:spring-api-deliveries] is false

Please advise.
Hello members, I have a requirement involving a set of servers (a, b, c, d, e, f) for which I want to send alerts. When CPU utilization is high on server a, the alert should be sent to one specific email address, abc@splunk.com; for the other servers, the alert should go to the email group xyz@splunk.com. There is no option for this in Splunk's alert settings, so the recipient needs to be set through SPL using eval, but it is not working for me. I have tried the following:

| eval condition_alert=if(server == "a", "abc@splunk.com", "xyz@splunk.com") --- not working
| eval condition_alert=if(like(server, "%a%"), "abc@splunk.com", "xyz@splunk.com") --- also not working

Please suggest a solution.
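One pattern that may work here is to compute the recipient inside the search and reference it in the email alert action with a result token. This is a hedged sketch — the index, sourcetype, metric field, and the 90% threshold are all assumptions, not from the post:

```
index=perf sourcetype=cpu (server=a OR server=b OR server=c OR server=d OR server=e OR server=f)
| stats avg(cpu_load_percent) as cpu by server
| where cpu > 90
| eval condition_alert=if(server == "a", "abc@splunk.com", "xyz@splunk.com")
```

Then set the alert's "To" field to $result.condition_alert$. Two caveats: eval function names such as like() and match() are lowercase, and $result.fieldname$ takes its value from the first result row only, so routing different rows to different recipients may require one saved alert per recipient group (or a custom alert action).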
Creating a dashboard to track when users badge into and out of different areas.

Problem: If I do a basic search for a user_id, I get back multiple listings for that user with different timestamps for each badge use — great. I created a dashboard that allows me to search all/specific user(s) by ID and choose what timeframe to search in. In the "User Activity" panel at the bottom, the dashboard will only display 1 event per user_id in that timeframe (or just 1 event for a specific user when searched), even if a user badged in multiple times.

Request: How can I make it so all instances of a badge being used are shown? I would prefer a dropdown-style setup where all events for a user show up under their ID, but I can live with all instances during that timeframe being shown in series.

Source:

<form theme="dark" refresh="300">
  <label>Employee Tracker</label>
  <search id="base_search">
    <query>index="argus" argus_passage | table _time, user_id, building_name, portal, from_area, to_area</query>
    <earliest>$time_token.earliest$</earliest>
    <latest>$time_token.latest$</latest>
  </search>
  <fieldset submitButton="true" autoRun="true">
    <input type="dropdown" token="user_id" searchWhenChanged="false">
      <label>Search a UserID:</label>
      <choice value="*">All</choice>
      <search base="base_search">
        <query>| stats count by user_id</query>
      </search>
      <fieldForLabel>user_id</fieldForLabel>
      <fieldForValue>user_id</fieldForValue>
      <default>*</default>
      <initialValue>*</initialValue>
    </input>
    <input type="time" token="time_token" searchWhenChanged="false">
      <label>Choose a TimeFrame:</label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <single>
        <title>Total Users for Desired TimeFrame</title>
        <search base="base_search">
          <query>| dedup user_id | stats count</query>
        </search>
        <option name="colorMode">block</option>
        <option name="drilldown">none</option>
        <option name="height">251</option>
        <option name="rangeColors">["0x53a051","0x0877a6","0xf8be34","0xf1813f","0x006d9c"]</option>
        <option name="underLabel">Total Users</option>
        <option name="useColors">1</option>
      </single>
    </panel>
  </row>
  <row>
    <panel>
      <chart>
        <title>Portal Usage</title>
        <search base="base_search">
          <query>| search user_id=$user_id$ | dedup user_id | stats count by portal</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>Ingress Portal</title>
        <search base="base_search">
          <query>| search user_id=$user_id$ | dedup user_id | stats count by from_area</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
    <panel>
      <chart>
        <title>Egress Portal</title>
        <search base="base_search">
          <query>| search user_id=$user_id$ | dedup user_id | stats count by to_area</query>
        </search>
        <option name="charting.chart">pie</option>
        <option name="charting.drilldown">none</option>
      </chart>
    </panel>
  </row>
  <row>
    <panel>
      <table>
        <title>User Activity</title>
        <search base="base_search">
          <query>| search user_id=$user_id$ | dedup user_id | sort -count</query>
        </search>
        <option name="count">50</option>
        <option name="drilldown">none</option>
      </table>
    </panel>
  </row>
</form>
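The one-event-per-user behavior most likely comes from `dedup user_id` in the User Activity panel, which keeps only the most recent event per user. A hedged sketch of that panel without the dedup, sorted by time so every badge event appears (this is a suggested fix, not the poster's confirmed solution):

```
<table>
  <title>User Activity</title>
  <search base="base_search">
    <query>| search user_id=$user_id$ | sort - _time</query>
  </search>
  <option name="count">50</option>
  <option name="drilldown">none</option>
</table>
```

The same `dedup user_id` appears in the three pie-chart panels; whether it belongs there depends on whether those charts should count users or badge events.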
We have data which is not being indexed that needs to be searched. I've been told by our Splunk admin team that the data is available to be searched. I've attempted to search for Windows Event 4688 ("A new process has been created") in the "source" and "sourcetype" fields for testing and do not get any results back. To keep licensing costs down, the admin team informed me that Event 4688 is not going to be indexed, along with a number of other event codes. Any recommendations on performing a search on data that is not indexed would be appreciated. I am also not sure how to build a regex search for a specific event ID, which was suggested. Below are the searches I tried that failed:

sourcetype="wineventlog:security" EventCode=4688 --> No results found. Try expanding the time range.
source="wineventlog:security" EventCode=4688 --> No results found. Try expanding the time range.
Hi, I have a dashboard consisting of:

1 - a time picker
2 - a chart displaying a count from a search
3 - a panel (table) displaying search results

How can I refresh the search results by clicking on the chart? Say I click on the value 3 in the chart; the panel below the chart, named Errors, should then display the search results for that specific time period (in this case, the panel would populate with 3 errors).
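In Simple XML this is usually done with a <drilldown> block on the chart that sets tokens from the clicked data point, which the lower panel's search then consumes. A hedged sketch — the index, sourcetype, and field names are assumptions; $earliest$ and $latest$ are predefined drilldown tokens for the clicked time bucket on a timechart:

```
<panel>
  <chart>
    <title>Error count</title>
    <search>
      <query>index=my_app sourcetype=my_errors | timechart count</query>
    </search>
    <drilldown>
      <set token="dd_earliest">$earliest$</set>
      <set token="dd_latest">$latest$</set>
    </drilldown>
  </chart>
</panel>
<panel>
  <table>
    <title>Errors</title>
    <search>
      <query>index=my_app sourcetype=my_errors | table _time, message</query>
      <earliest>$dd_earliest$</earliest>
      <latest>$dd_latest$</latest>
    </search>
  </table>
</panel>
```

Adding depends="$dd_earliest$" to the lower panel hides it until the chart has been clicked at least once.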
Hello team, I have the following query:

host="mule1" OR host="mule2" Message="message: Start of Flow CreateUser flow" OR Message="message: All system calls for CREATE user is completed" | stats count by Message

In the output, I want a third row labeled "Failures" under the Message column, whose count is the first row's count minus the second row's count.
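One way to sketch this is with appendpipe, which runs a sub-pipeline over the current results and appends its output as an extra row. This is hedged on one assumption not stated in the post: the "Start of Flow" count is always greater than or equal to the "completed" count, so the difference between the two rows equals range(count):

```
host="mule1" OR host="mule2" Message="message: Start of Flow CreateUser flow" OR Message="message: All system calls for CREATE user is completed"
| stats count by Message
| appendpipe
    [ stats range(count) as count
      | eval Message="Failures" ]
```

If the two counts can cross over, replacing range(count) with an explicit max-minus-min per message would need a different construction.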
Please confirm/deny something for me, because it's not clear from the docs. Let's assume I have events containing both "unstructured" data and JSON, similar to the ones from https://community.splunk.com/t5/Getting-Data-In/JSON-transformations/m-p/370127#M67168

Dec 1 22:29:42 127.0.0.1 1 2017-12-01 LOGSERVER 1292 - - {"event_type":"type_here","ipv4":"127.0.0.1","hostname":"pc_name.local","occured":"01-Dec-2017 22:24:34"}

If I set KV_MODE=json, I assume the fields from the JSON part should get parsed automatically. But what about the rest of the message? Can I still apply transforms to get additional fields parsed from the event?
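For reference, KV_MODE only controls automatic key/value extraction; EXTRACT- and REPORT- settings still run on the same events, so extractions for the non-JSON prefix can be layered on top. A hedged props.conf sketch — the sourcetype name and the captured field names are assumptions, and whether KV_MODE=json copes with a non-JSON prefix on your exact data should be verified on a sample:

```
[my_mixed_sourcetype]
KV_MODE = json
# search-time extraction for the syslog-style prefix (field names assumed)
EXTRACT-syslog_prefix = ^(?<syslog_time>\w{3}\s+\d+\s+\d{2}:\d{2}:\d{2})\s+(?<syslog_host>\S+)
```

If the automatic JSON extraction does not fire on mixed events, an alternative is a rex/spath pair at search time that isolates the {...} portion first.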
Hi, I have configured Splunk with LDAP authentication, and everything appears correct: the group and the users assigned within it look the way they should. As I understand the configuration, the last step is to assign the roles, and then login should just work. In my case it does not: I have all the available roles assigned to the group, and I still can't log in with, for example, jmcastillo, which is an LDAP user. Could someone help me understand what is happening? What else do I need, or what configuration is wrong? Greetings, and thank you.
Can anyone help me write a query for the scenario below?

Financial Year Revenue Comparison (Month / Year):
Revenue for a month in the current financial year vs. the same month in the current-1 financial year vs. the same month in the current-2 financial year
Current financial year revenue vs. current-1 financial year revenue vs. current-2 financial year revenue

*Revenue = sum of the total amount
X-axis = current-2, current-1, current
Y-axis = amount
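A hedged sketch of one approach: derive a financial-year label per event, then chart the revenue. Everything here is an assumption not stated in the post — the index, the amount field, and an April-to-March financial year:

```
index=sales earliest=-3y@y
| eval fy_start=if(tonumber(strftime(_time, "%m")) >= 4, tonumber(strftime(_time, "%Y")), tonumber(strftime(_time, "%Y")) - 1)
| eval fy="FY" . tostring(fy_start)
| eval month=strftime(_time, "%b")
| chart sum(amount) as revenue over fy by month
```

Swapping to `over month by fy` gives the month-by-month comparison across the three financial years instead of the yearly totals.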
I have an index where one of the relevant fields is a domain. This index is used in a search in a dashboard, where I have a table showing relevant fields. Let's consider this: index=foo contains 5 logs:

domain=community.example.com serial=123 desc=whatever1
domain=dev.example.com serial=456 desc=whatever2
domain=abc.dev.example.com serial=789 desc=whatever3
domain=def.dev.com serial=098 desc=whatever4
domain=blog.example.com serial=765 desc=whatever5

I have a whitelist CSV inputlookup with the following:

*.dev.example.com
blog.example.com

I want the table to show a column "whitelist" with YES if the domain is in the whitelist. I'm using the query below to do that, but it does not work for wildcard domains. For example, all domains matching *.dev.example.com (in my example, abc.dev.example.com and def.dev.example.com) should have the whitelist column set to YES, but they appear blank. On the other hand, blog.example.com appears with YES in the whitelist column, because it does not have any wildcard in the whitelist.

index=foo
| join type=left domain
    [| inputlookup whitelist_foo.csv | eval whitelist="YES" | fields domain, serial, desc, whitelist]
| table domain, serial, desc, whitelist

From what I understood, the left join interprets the asterisk as a literal string, not as a wildcard. How can I overcome that?
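Splunk lookups support wildcard matching natively via match_type in transforms.conf, which avoids the join entirely. A hedged sketch — the stanza name matches the CSV name from the post, but it assumes the CSV has a `domain` column plus a `whitelist` column set to YES on every row (or a fillnull afterwards to label non-matches):

```
# transforms.conf
[whitelist_foo]
filename = whitelist_foo.csv
match_type = WILDCARD(domain)
max_matches = 1
```

Then the dashboard search becomes:

```
index=foo
| lookup whitelist_foo domain OUTPUT whitelist
| table domain, serial, desc, whitelist
```

With match_type = WILDCARD(domain), the `*` in `*.dev.example.com` is treated as a wildcard during the lookup, so abc.dev.example.com matches that row.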
Hello Splunkers, I'm collecting Aruba AP (Aruba Access Point) logs from my rsyslog inputs, and I use the Aruba_Networks add-on available on splunk.com to parse the logs. Currently Splunk collects the "host" field as the IP address of the AP (host = ip). I created a lookup associating a hostname with the IP of each AP:

hostname,host
hostname1,ip1
hostname2,ip2
hostname3,ip3
...,...
hostnameN,ipN

The issue is that I want a "hostname" field and an "ip" field, but I have neither of them. I tried to change my props.conf file:

[(?::){0}aruba*]
LOOKUP-aruba_ap_list = aruba_ap_list host OUTPUTNEW hostname host AS ip

and my transforms.conf (the file exists in the lookup folder):

[aruba_ap_list]
filename = aruba_ap_list.csv

But the "hostname" field isn't created, and the "ip" alias of the "host" field isn't created either. I don't know what I'm doing wrong. Can you help me, please?
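A hedged sketch of an alternative configuration. The sourcetype stanza name below is an assumption (check the real one with `| stats count by sourcetype`); the `(?::){0}aruba*` wildcard-sourcetype trick can be fragile, so binding to the exact sourcetype may be more reliable. The ip field is produced by a separate FIELDALIAS setting rather than inside the LOOKUP line:

```
# props.conf
[aruba:syslog]
LOOKUP-aruba_ap_list = aruba_ap_list host OUTPUTNEW hostname
FIELDALIAS-aruba_ip = host AS ip

# transforms.conf
[aruba_ap_list]
filename = aruba_ap_list.csv
```

Both settings need to live in the same app, with the CSV in that app's lookups directory, and the lookup definition and file need permissions that make them visible at search time.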
Hi all, can you please help me with the Splunk-to-CEF field mapping for Cisco OpenDNS?
Hi SMEs, I am seeking help to capture the two values below ("Only string1" and "Only string2") with a single regex:

","category":"Only string1",
","category":"a1b2c3-Only string2",
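A hedged sketch using rex. It assumes the unwanted prefix, when present, is always a dash-terminated alphanumeric token like a1b2c3-; if prefixes can contain dashes themselves, the prefix pattern would need adjusting:

```
| rex field=_raw "\"category\":\"(?:[A-Za-z0-9]+-)?(?<category>[^\"]+)\""
```

Against the two samples, the optional non-capturing group consumes "a1b2c3-" in the second event and matches nothing in the first, so `category` comes out as "Only string1" and "Only string2" respectively.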
Hi, I would like to count how many times the message "Booking failed with 1 source conflict and 1 destination conflict" occurs in the log:

index="xx" OR index=main host="xxx" "booking failed" source="/opt/ipath/log/main.log" NOT update NOT Details
| eval Reason = case( bookname="failed with 1 source conflict and 1 destination conflict\"", "Booking failed with 1 source conflict and 1 destination conflict" )
| stats count by Reason

Old log line:
"2021-05-11 13:59:39,615 backend_7.20.47: INFO services/PathManagerService(backend): Booking failed with 1 source conflict and 1 destination conflict"

New log line:
"2021-06-27 14:24:33,513 backend_8.20.26: INFO vip.service.PathManagerService Booking failed with 1 source conflict and 1 destination conflict [1930711-4]"

After the system upgrade, I don't know how to ignore the [1930711-4] part.
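A hedged sketch that sidesteps the exact-match problem by extracting the message itself with rex, so any trailing token such as [1930711-4] is simply ignored (the conflict counts are generalized to \d+, which is an assumption if only the 1-and-1 variant should be counted):

```
index="xx" OR index=main host="xxx" "booking failed" source="/opt/ipath/log/main.log" NOT update NOT Details
| rex "(?<Reason>Booking failed with \d+ source conflicts? and \d+ destination conflicts?)"
| stats count by Reason
```

To count only the exact "1 source conflict and 1 destination conflict" case, replace each \d+ with a literal 1.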
I am running the following search query manually to obtain the history of triggered alerts (time, name, severity):

index=_audit action=alert_fired ss_app=* | eval ttl=expiration-now() | search ttl>0 | table trigger_time ss_name severity | rename trigger_time as "Alert Time" ss_name as "Alert Name" severity as "Severity"

I am observing the following strange behavior: I run this over "all time", yet I can never match timestamps across two searches. What I mean is that if I look up the value of trigger_time for an alert that fired in May, and then search for that very same value in an all-time export generated in June, I will not find it. It is not a single occurrence: whichever record I randomly select, I am unable to find in other files exporting seemingly the same data. In case it is important, I have 3 search heads in a cluster. Is this a bug? (v8.0.2.1) If it is not a bug, how do I make Splunk provide the precise time at which alerts were triggered?
This is my sample data; I need the total number of "Passed" values. The headers are: Node Name, _time, Anti-Spoofing, Rule Banner, Rule Http, Rule Palo alto, Username, SSH Timeout, Ssh Access, Tacacs, Telnet Rule, console port config, ntp server, Result.

NDL-ALM-GSD-BUS-FW-01 2021-06-24 17:27:35 Passed Passed Passed Passed Passed Passed Passed Passed Passed Passed Passed
USA-DNV-CUS-BUS-FW-02 2021-06-24 17:27:35 Passed Passed Passed Passed Passed Passed Passed Passed Passed Passed Passed
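A hedged sketch of one way to total the "Passed" values, assuming each check is its own field as the headers suggest (the index and sourcetype names are placeholders, not from the post). foreach walks every field of every row and accumulates a per-row count, which stats then sums:

```
index=compliance sourcetype=fw_checks
| eval passed_in_row=0
| foreach * [ eval passed_in_row = passed_in_row + if('<<FIELD>>' == "Passed", 1, 0) ]
| stats sum(passed_in_row) as total_passed
```

Fields whose value is never "Passed" (Node Name, the counter itself) contribute 0, and _time is excluded because the * wildcard skips underscore-prefixed fields.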
Hi, I have managed to link Splunk with LDAP. The problem is that after logging out of Splunk, I have tried to log in again with some of the new users brought in through LDAP, without success. Also, once I finished the configuration, in the group mapping section the users appear as if they were groups — is that how it is supposed to be?
I have a complex Splunk dashboard that needs to be updated through the REST API. The dashboard was created manually, so its XML is very large, with many graphs, charts, and complex search queries. I am able to GET the dashboard contents, but when I try to update the dashboard via POST and pass the entire XML as eai:data, I get the error "Unparsable URI-encoded request data".

When I try to pass the XML as a file (request.xml) to curl:

curl -ku username:pwd -X POST -H "Content-Type:text/xml" -d 'name=testdashboard' -d 'eai:data=@request.xml' https://my_splunk_url:8089/servicesNS/username/appspace/data/ui/views

<?xml version="1.0" encoding="UTF-8"?>
<response>
  <messages>
    <msg type="ERROR">Error parsing XML on line 1: Start tag expected, '&lt;' not found</msg>
  </messages>
</response>

In request.xml I have tried replacing <form> with &lt;form&gt;, still with no luck. Can someone please advise how to update a large, complex XML through the REST API?
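A hedged sketch of what may be going wrong: with -d, curl only expands @file when the whole value starts with @, so 'eai:data=@request.xml' is sent as a literal string, and -d does not URL-encode the payload; the endpoint also expects form-encoded data rather than Content-Type: text/xml. curl's --data-urlencode with the name@filename form reads the file and URL-encodes its contents, and updates go to the view's own URL (the name= parameter is only needed when creating a new view):

```
# update an existing dashboard; host, credentials, and paths are taken from the post
curl -ku username:pwd \
  --data-urlencode "eai:data@request.xml" \
  https://my_splunk_url:8089/servicesNS/username/appspace/data/ui/views/testdashboard
```

curl defaults to POST when a data option is given, so -X POST is not needed here.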
Hi there, I want to append a null frame character (x00) to my raw logs as they are intercepted by a props stanza. How can I solve this? I tried inserting settings like SEDCMD, and a REGEX = (.*) followed by a FORMAT, without success. Any hints?