All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I've got a search head cluster with servers in two time zones. Users are geolocation load balanced, but if one site's servers are unreachable, they are routed to the other site's servers. I know users can set their time zone in preferences; however, I'm looking for a solution that would allow me to update the default time zone information in their user accounts through a query or the REST API.
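For anyone with the same question: a user's default time zone can in principle be set through the authentication/users REST endpoint. A minimal sketch, not tested against a cluster — the hostname, credentials, and username are all placeholders:

```
# Set the tz attribute on an existing user account (all values are placeholders)
curl -k -u admin:changeme \
    https://sh1.example.com:8089/services/authentication/users/jsmith \
    -d tz="America/New_York"
```

In a search head cluster this would need to target each member (or the captain) depending on how user configuration is replicated in your deployment.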
I am trying to set up an alert in Splunk that will email a user whenever their Windows session is X days old. It would be across multiple hosts/users and use the security event log to determine if there hasn't been a 4647 or 1074 event since their 4624 logon event.  Has anyone set up something similar? Thanks in advance!
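A sketch of the kind of search that could drive such an alert, assuming a standard Windows Security event index and the usual EventCode, host, and user fields — the index name and the 5-day threshold are assumptions standing in for X:

```
index=wineventlog sourcetype="WinEventLog:Security" EventCode IN (4624, 4647, 1074)
| stats latest(eval(if(EventCode=4624, _time, null()))) as last_logon
        latest(eval(if(EventCode!=4624, _time, null()))) as last_logoff
        by host, user
| where isnotnull(last_logon) AND (isnull(last_logoff) OR last_logon > last_logoff)
| eval session_age_days = round((now() - last_logon) / 86400, 1)
| where session_age_days >= 5
```

The idea is to keep only host/user pairs whose most recent logon has no later logoff (4647) or shutdown (1074) event, then age the session against now().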
index="fw" app="ping"
| bin _time span=10m
| stats count by client_ip, dest_ip
| stats list(dest_ip) AS dest_ip, list(count) AS count by client_ip
| table client_ip, dest_ip, count

I'd like to find the source IP and destination IP pairs that were pinged 10 times within 10 minutes. However, this query counts over the whole time range. Give me a hand with this.
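One way to keep the count per 10-minute window is to leave _time (as produced by bin) in the stats by clause, so counts are computed per bin instead of across the whole range; a sketch:

```
index="fw" app="ping"
| bin _time span=10m
| stats count by _time, client_ip, dest_ip
| where count >= 10
```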
Hello, I just installed the add-on "AlienVault Check OTX" in my Splunk Enterprise instance. I have integrated my API key, but when I use the command | checkotx <ip-add>, it shows nothing. Is there any configuration missing? Can you provide me with documentation? @larmesto kindly please update me.
I have a time field with values such as 9AM-10PM and 10:00AM-11:00PM. I want to change 9AM-10PM to 9:00AM-10:00PM to normalize the field into the same format. I tried strftime(strptime(time_field,"%H%p-%H%p"),"%H:%M%p-%H:%M%p") but it's not working. I also tried convert() and fieldformat, but no luck. Any idea how I can achieve this?
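A sketch of one way to normalize the field, assuming each value is always "start-end" in 12-hour notation. Note %I rather than %H: %H is the 24-hour specifier and won't pair correctly with %p for values like 9AM. Also, strptime can't parse the whole range in one call, so the two halves are split out first:

```
| eval parts   = split(time_field, "-")
| eval start_e = coalesce(strptime(mvindex(parts,0), "%I:%M%p"), strptime(mvindex(parts,0), "%I%p"))
| eval end_e   = coalesce(strptime(mvindex(parts,1), "%I:%M%p"), strptime(mvindex(parts,1), "%I%p"))
| eval time_norm = strftime(start_e, "%I:%M%p") . "-" . strftime(end_e, "%I:%M%p")
```

The coalesce tries the with-minutes format first and falls back to the hour-only format, so both 9AM and 10:00AM parse.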
I want to make a presentation in a dashboard where I can see a line per service with the duration of each call of that service. I have made a table in Splunk and would like to create a line chart with multiple lines (one line per service), with duration on the y-axis and time on the x-axis. How can I do that? Or is there another way to get that done?

My search:

index=test sourcetype=test-performance duration>0
| convert timeformat="%Y-%m-%dT%H:%M:%S%:z" ctime(_time) AS date
| table date, metric, duration
| sort by date

Some events from the table:

date                        metric     duration
2021-08-25T08:55:28+02:00   service1   93
2021-08-25T08:55:28+02:00   service1   4
2021-08-25T08:55:28+02:00   service3   3
2021-08-25T08:55:28+02:00   service4   1
2021-08-25T08:55:23+02:00   service5   84
2021-08-25T08:55:20+02:00   service5   88
2021-08-25T08:50:55+02:00   service1   91
2021-08-25T08:50:55+02:00   service1   18
2021-08-25T08:50:55+02:00   service3   14
2021-08-25T08:50:55+02:00   service6   2
2021-08-25T08:50:55+02:00   service7   4
2021-08-25T08:50:55+02:00   service4   5
2021-08-25T08:50:54+02:00   service8   46
2021-08-25T08:50:54+02:00   service9   43
2021-08-25T08:49:58+02:00   service1   88
2021-08-25T08:49:58+02:00   service1   17
2021-08-25T08:49:58+02:00   service3   16
2021-08-25T08:49:58+02:00   service10  10
2021-08-25T08:49:58+02:00   service11  10
2021-08-25T08:49:58+02:00   service6   2
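For a multi-series line chart, timechart may be simpler than building the table first, since it produces one series per split-by value and keeps _time on the x-axis automatically; a sketch (the 1-minute span is an assumption to match the event density shown):

```
index=test sourcetype=test-performance duration>0
| timechart span=1m avg(duration) by metric
```

Rendered as a line chart, each metric value (service1, service3, ...) becomes its own line; avg(duration) collapses multiple calls within the same bin.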
I am trying to install the Exchange Server add-on and am facing the error below:

"There was an error processing the upload. Invalid app contents: archive contains more than one immediate subdirectory: and TA-Windows-Exchange-IIS"

Kindly help me out.
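For reference, this error means the top level of the archive contains more than one entry — here apparently an empty/hidden name plus TA-Windows-Exchange-IIS, which often points to stray metadata files added during download or extraction. A sketch of repackaging so only the app directory sits at the archive root (the file names here are illustrative, not the real package):

```shell
# Simulate the problem: an archive whose root holds the app folder PLUS a
# stray metadata file triggers "more than one immediate subdirectory".
mkdir -p TA-Windows-Exchange-IIS/default
touch ._stray_metadata
tar -czf broken.tgz TA-Windows-Exchange-IIS ._stray_metadata

# Fix: repackage with ONLY the app directory at the archive root.
tar -czf fixed.tgz TA-Windows-Exchange-IIS

# Verify: this should list exactly one top-level entry.
tar -tzf fixed.tgz | cut -d/ -f1 | sort -u
```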
Hi, we have a multisite cluster with 1 indexer on each site and 1 SH on the primary site. Currently, when search affinity is enabled and we run a search on the index "crowdstrike", we can see the past 30 days of data. But when search affinity is disabled on the search head, the same search displays only recent data and not the past 30 days. Question: is there something missing configuration-wise?
I would like to find the IPs that generated status=404 more than 10 times within 10 minutes during the past week. Please help me. Field list: ip = src_ip, status = status.
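A sketch of one way to do this, binning into 10-minute windows and counting per source IP (index name is a placeholder for wherever these web events live):

```
index=web status=404 earliest=-7d@d
| bin _time span=10m
| stats count by _time, src_ip
| where count > 10
```

Each remaining row is a src_ip plus the 10-minute window in which it exceeded 10 404s.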
I have a scheduled alert running every 15 minutes on a cron schedule. I set the trigger actions to Email, ServiceNow ticket, and MS Teams notification. About 80% of the alerts arrive successfully, but I am failing to receive the remaining 20% in Email, ServiceNow, and MS Teams. When I run the search manually I can find results, but I didn't receive the corresponding alerts. When I searched the scheduler logs, I didn't find any failure entries. Please help here.
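One place to look is the scheduler's own records in _internal, which show per-run status, result counts, and whether throttling suppressed the actions; a sketch (the saved search name is a placeholder, and exact field availability varies by version):

```
index=_internal sourcetype=scheduler savedsearch_name="My Alert"
| table _time, status, run_time, result_count, alert_actions, suppressed
```

Runs with result_count > 0 but empty alert_actions, or with suppressed=1, would point to skipped/throttled triggers rather than delivery failures.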
When I use SVG in the Splunk Dashboards app, it shows the error below. I want to know why we get this error and how we can fix it.

Splunk version: 8.0.5
Splunk Dashboards App version: 0.8.0

Source code:

{
    "visualizations": {
        "viz_7Gcj22nE": {
            "type": "viz.choropleth.svg",
            "options": {
                "backgroundColor": "transparent",
                "svg": "<svg width=\"322\" height=\"32\" viewBox=\"0 0 322 32\" fill=\"none\" xmlns=\"http://www.w3.org/2000/svg\">\n<rect id=\"level1\"  y=\"12\" width=\"33\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level2\"  x=\"37\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level3\"  x=\"73\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level4\"  x=\"109\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level5\"  x=\"145\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level6\"  x=\"181\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level7\"  x=\"217\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level8\"  x=\"253\" y=\"12\" width=\"32\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n<rect id=\"level9\"  x=\"289\" y=\"12\" width=\"33\" height=\"8\" rx=\"4\" fill=\"#483F9B\"/>\n</svg>"
            },
            "encoding": {
                "featureId": "primary[0]",
                "value": "primary[1]",
                "fill": {
                    "field": "primary[1]",
                    "format": {
                        "type": "rangevalue",
                        "ranges": [
                            {
                                "from": 1500,
                                "value": "#483F9B"
                            },
                            {
                                "from": 500,
                                "to": 1500,
                                "value": "#A870EF"
                            },
                            {
                                "to": 500,
                                "value": "#483F9B"
                            }
                        ]
                    }
                }
            },
            "dataSources": {
                "primary": "ds_xtEHKvxm"
            }
        }
    },
    "dataSources": {
        "ds_xtEHKvxm": {
            "type": "ds.search",
            "options": {
                "query": "| makeresults \n| eval Progress=\"level1\"| eval count=1000\n| append [| makeresults | eval Progress=\"level2\"| eval count=1000]\n| append [| makeresults | eval Progress=\"level3\"| eval count=10]\n| append [| makeresults | eval Progress=\"level4\"| eval count=10]\n| append [| makeresults | eval Progress=\"level5\"| eval count=10]\n| append [| makeresults | eval Progress=\"level6\"| eval count=10]\n| append [| makeresults | eval Progress=\"level7\"| eval count=10]\n| append [| makeresults | eval Progress=\"level8\"| eval count=10]\n| append [| makeresults | eval Progress=\"level9\"| eval count=10]\n|table Progress,count"
            },
            "name": "Search_1"
        }
    },
    "defaults": {
        "dataSources": {
            "ds.search": {
                "options": {
                    "queryParameters": {
                        "latest": "$global_time.latest$",
                        "earliest": "$global_time.earliest$"
                    }
                }
            }
        },
        "visualizations": {
            "global": {
                "showLastUpdated": true
            }
        }
    },
    "inputs": {
        "input_global_trp": {
            "type": "input.timerange",
            "options": {
                "token": "global_time",
                "defaultValue": "-24h@h,now"
            },
            "title": "Global Time Range"
        }
    },
    "layout": {
        "type": "absolute",
        "options": {},
        "structure": [
            {
                "item": "viz_7Gcj22nE",
                "type": "block",
                "position": {
                    "x": 0,
                    "y": 0,
                    "w": 380,
                    "h": 300
                }
            }
        ],
        "globalInputs": [
            "input_global_trp"
        ]
    },
    "description": "",
    "title": "test"
}
Hello, I have alert transactions at "ACK" and at "Resolved". I have created a table for each value, but I am unable to edit the time format of each. Please help; please find the attached image for reference. Current output:

857415 piyush.moorjani piyush.moorjani 2021-08-25T01:57:26Z 2021-08-25T01:58:47Z ACKED RESOLVED

I need the time format of the third column changed.
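Such an ISO-8601 column can be reformatted with strptime/strftime; a sketch where the field name ack_time and the output format are placeholders for whatever the table actually uses:

```
| eval ack_time = strftime(strptime(ack_time, "%Y-%m-%dT%H:%M:%SZ"), "%Y-%m-%d %H:%M:%S")
```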
Hello, I'm trying to configure the CloudTrail and CloudWatch data inputs to collect AWS logs for Splunk. When I select a region that I think is correct, no log data comes into Splunk. When I go into the inputs.conf file manually and enter the region that was assigned to my program's account, there is still no log data. I even went in and configured an index for the AWS add-on, went into the metadata and changed the saved searches/macros to point to the new index I created, etc. Has anyone experienced this issue before?
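When an AWS input collects nothing, the add-on's own internal logs are usually the first place to check for permission or region errors; a sketch (the source patterns are assumptions matching the add-on's typical log file names, and field availability can vary):

```
index=_internal (source=*aws_cloudtrail* OR source=*aws_cloudwatch*) (ERROR OR WARN)
| stats count by source, log_level
```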
Hi, I need to calculate an average based on a condition: testing=true vs testing=false (let's say field A). Field B has the values to average (milliseconds), and field C has URLs. Something like this:

| stats avg(fieldB, when field A testing true) as trueV avg(fieldB, when field A testing false) as falseV by field C

The goal is a table like this: url | average (true) | average (false).
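The condition can be pushed inside stats with avg(eval(...)); a sketch using the literal field names testing, fieldB, and fieldC as stand-ins for fields A, B, and C:

```
| stats avg(eval(if(testing="true",  fieldB, null()))) as trueV
        avg(eval(if(testing="false", fieldB, null()))) as falseV
        by fieldC
```

Rows where the condition fails contribute null() and are ignored by avg, which gives the two conditional averages side by side per URL.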
Hi all, we have lots of alerts and reports configured in Splunk that are in a disabled state. How can we get their list into an Excel sheet? Also, how can we get a list of all dashboards in Splunk into one Excel sheet, so that we can review it and delete the unwanted ones? Please note: this is not about orphaned searches; I already got those from the Search app dashboard. Thanks.
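The REST endpoints can list these objects, and the resulting table can be exported to CSV for Excel; a sketch for disabled alerts/reports (swap the endpoint to /servicesNS/-/-/data/ui/views and drop the disabled filter to list dashboards instead):

```
| rest /servicesNS/-/-/saved/searches
| search disabled=1
| table title, eai:acl.app, eai:acl.owner, disabled
```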
I am running Splunk Enterprise on-prem and have a set of indexers in a cluster in one region and another set of indexers in a separate cluster in a different region. If region A is completely lost but we have backups in region B of the data from region A, is it possible to restore the data directly into the indexer cluster in region B, or would we have to restore the data, put it into thawed, and run the thawing process bucket by bucket? We are not running a multisite cluster. This is for a DR procedure, but at the same time it would be nice to know the best way to do this, as we have a third cluster set up whose data we will eventually want moved to one of the other clusters, to allow decommissioning of the third clustered location. (The same indexes exist in all 3 separate clustered environments.) Thanks.
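For the thawed route, the classic per-bucket procedure looks roughly like this; a sketch only, with hypothetical index, path, and bucket names:

```
# Copy a backed-up bucket into the target index's thaweddb (names are hypothetical)
cp -r /backups/regionA/myindex/db_1629000000_1628000000_42 \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/

# Rebuild the bucket's indexes so it becomes searchable again
$SPLUNK_HOME/bin/splunk rebuild \
      $SPLUNK_HOME/var/lib/splunk/myindex/thaweddb/db_1629000000_1628000000_42
```

Thawed buckets sit outside retention policies, which is usually what you want for a DR restore of another cluster's data.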
I am working on upgrading a deployment server, which is typically an easy task. My issue is that this particular environment has an unusual path instead of the usual /opt/splunk/etc: it has /opt/Splunk/splunkenterprise/etc/. I feel that if I run the upgrade as I normally do, untarring the file to /opt, it could create some issues. Does anyone have insight as to whether this should be upgraded business as usual, or whether the command needs modification? I am worried that untarring under /opt will cause issues with other possible dependencies on particular file paths.
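For a non-standard install path, tar's -C plus --strip-components can drop the tarball's top-level splunk/ directory so the contents land directly in the existing directory. A self-contained demonstration using a mock tarball (a real upgrade would use the actual Splunk package and /opt/Splunk/splunkenterprise, after a backup):

```shell
# Build a mock package whose top-level directory is splunk/ (as the real tgz has)
mkdir -p splunk/etc
echo "test" > splunk/etc/splunk-launch.conf
tar -czf splunk-mock.tgz splunk

# Extract into a custom install dir, stripping the leading splunk/ component
mkdir -p target/splunkenterprise
tar -xzf splunk-mock.tgz -C target/splunkenterprise --strip-components=1

# Files now sit at <install>/etc/... rather than <install>/splunk/etc/...
ls target/splunkenterprise/etc/splunk-launch.conf
```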
Here is a simple set of records to demonstrate the data (but not the two sourcetypes). The real query would be more like index=myindex (sourcetype=A OR sourcetype=B). Let's say RequiredOnHand is sourcetype=B and the other Containers are in sourcetype=A. I would like to create the following lists:

1. List Contains values from sourcetype=A that match (or are missing from) sourcetype=B:

Contains Basket Bunch Pint RequiredOnHand
Apples 0 0 0
Bananas 0 0
Grapes 0
Oranges 0 0
Strawberries 0 0

2. List any Contains values in sourcetype=A that are not in sourcetype=B:

Contains       Basket
Balls          1

3. List any Contains values in sourcetype=B that are missing from sourcetype=A:

Contains       Basket
Kiwi           1

| makeresults | eval Container="Basket" | eval Contains="Apples" | eval From="FieldA"
| append [| makeresults | eval Container="Basket" | eval Contains="Oranges" | eval From="FieldB"]
| append [| makeresults | eval Container="Bunch" | eval Contains="Bananas" | eval From="FieldC"]
| append [| makeresults | eval Container="Bunch" | eval Contains="Grapes" | eval From="FieldD"]
| append [| makeresults | eval Container="Pint" | eval Contains="Strawberries"]
| append [| makeresults | eval Container="Pint" | eval Contains="Grapes"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Apples" | eval From="FieldA"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Oranges" | eval From="FieldB"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Bananas" | eval From="FieldC"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Strawberries"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Kiwi" | eval From="FieldD"]
| append [| makeresults | eval Container="RequiredOnHand" | eval Contains="Grapes"]
| append [| makeresults | eval Container="Basket" | eval Contains="Balls" | eval From="FieldA"]
| chart count(Container) as chart-count over Contains by Container

Results:

Contains       Basket  Bunch  Pint  RequiredOnHand
Apples         1       0      0     1
Balls          1       0      0     0
Bananas        0       1      0     1
Grapes         0       1      1     1
Kiwi           0       0      0     1
Oranges        1       0      0     1
Strawberries   0       0      1     1

Thanks for the help
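Starting from a chart like the one above, lists 2 and 3 can be sketched by comparing the sourcetype=A columns against the RequiredOnHand column (index and sourcetype names follow the poster's placeholders; flip the where clause to in_A = 0 AND RequiredOnHand > 0 for list 3):

```
index=myindex (sourcetype=A OR sourcetype=B)
| chart count over Contains by Container
| eval in_A = Basket + Bunch + Pint
| where in_A > 0 AND RequiredOnHand = 0
```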
Greetings to all, I'm having an issue with the Microsoft Teams TA. After setting up the subscription, I'm getting this error:

message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" subscription = create_subscription(helper, access_token, webhook_url, graph_base_url)
ERROR ExecProcessor - message from "/opt/splunk/bin/python3.7 /opt/splunk/etc/apps/TA_MS_Teams/bin/teams_subscription.py" ERROR400 Client Error: Bad Request for url: https://graph.microsoft.com/v1.0/subscriptions

Thanks. Jeff
Is there any way to programmatically find out the port on which Splunk is set up, using the Splunk Java SDK? Basically, to read the entry from $SPLUNK_HOME/etc/system/local/web.conf?
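For comparison, the same settings are exposed over REST by the server/settings endpoint, which can be queried from SPL (and which an SDK client can hit as well); a sketch:

```
| rest /services/server/settings splunk_server=local
| fields httpport, mgmtHostPort, enableSplunkWebSSL
```

httpport reflects the Splunk Web port configured in web.conf; mgmtHostPort is the management (splunkd) port.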