All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


Hi, the bin folder was deleted by mistake on one of our indexers. Is there any way to get it back? We don't have a backup for that indexer. Please suggest what we need to do.
Hi, in my logs I have a field named action. This field can have several values: allow, detect, block, etc. Since I would like my data to be presented in Enterprise Security dashboards as expected, I need to map each value to an allowed value for the relevant data model. For example:

Email data model, allowed action values: delivered, blocked, quarantined, deleted
Intrusion Detection data model, allowed action values: allowed, blocked

Meaning that when I extract the data in my app, I need to map my action value (for example: allow) to delivered / allowed, depending on the data model. How can I do that using my app configuration files?
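In Splunk itself this would usually end up as an eval/case expression or a lookup file in the app, but the mapping logic can be sketched as a per-data-model lookup table. The mappings below are illustrative assumptions, not the official CIM definitions:

```python
# Sketch: map a vendor "action" value onto the allowed values of a target
# data model. The tables here are made-up examples, not CIM definitions.
ACTION_MAP = {
    "email": {"allow": "delivered", "block": "blocked", "detect": "quarantined"},
    "intrusion_detection": {"allow": "allowed", "block": "blocked", "detect": "blocked"},
}

def normalize_action(raw_action, data_model):
    """Return the data-model-compliant action, or 'unknown' if unmapped."""
    return ACTION_MAP.get(data_model, {}).get(raw_action, "unknown")

print(normalize_action("allow", "email"))                # delivered
print(normalize_action("block", "intrusion_detection"))  # blocked
```

A lookup file per sourcetype (raw value in one column, normalized value in the other) keeps the same table editable without code changes.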
I have a standalone ReactJS web application and Splunk Enterprise (free version), both running on localhost of the same laptop. The React app will never run inside Splunk Enterprise as a dashboard. I have the conf19 Buttercup Games dashboard demo running in my React app; that example uses test data. I now want to set the data source to ds.search, query my local Splunk, and visualize the results. I'm only changing the definition file. This is an excerpt:

dataSources: {
  total_count_search: {
    type: 'ds.search',
    options: { query: 'index=_internal | stats count' }
  }
},
visualizations: {
  sv_total_event: {
    title: '_internal event count',
    type: 'viz.singlevalue',
    options: { backgroundColor: '#53a051' },
    dataSources: { primary: 'total_count_search' }
  }
}

I'm getting a CORS error: "Request header field x-requested-with is not allowed by Access-Control-Allow-Headers in preflight response." The web, inputs, and settings conf files all have crossOriginSharingPolicy set to my React app's URL. I'm not sure how to get past this. I can create searches and get results using the REST API. Are there any examples or documentation for searching Splunk from a standalone React app using the new Splunk dashboard framework, or for using the REST API to create a visualization with the new dashboards?
Hi, I am facing an error while trying to establish a new Oracle connection in the Splunk DB Connect app. Identities were added without any error. Details below.

Error: There was an error processing your request. It has been logged (ID .....).
Configured Java: /java/jdk1.8.0_251-amd64/
Installed drivers: Oracle 12.1
Splunk version: 8.0.4.1
DB Connect version: 3.3.1, build 5

dbx_settings.conf:
[java]
javaHome = /usr/java/jdk1.8.0_251-amd64/

identities.conf:
[Oracle_iden]
disabled = 0
password = U2FsdGVkX1+cAT2Scp6riQGah0uCf5sGgSaFYe4qZ2U=
use_win_auth = 0
username = admin

inputs.conf:
[http://db-connect-http-input]
disabled = 0
token = DF5BE1B9-0C3A-47F7-8B69-5815C91222B4

ui-metrics-collector.conf:
[collector]
app_id = e3743b4e-9f25-4ccb-ac66-122fa9e48149
mode = on

db_connections.conf:
[Oracle_Conn]
connection_type = oracle
database = test
disabled = 0
host = <hostname>
identity = Oracle_iden
jdbcUseSSL = false
localTimezoneConversionEnabled = false
port = 1521
readonly = false
timezone = Australia/Sydney

Can someone please help me fix this issue?
Hello, I have enabled the AppDynamics agent on AKS (Azure) but I am not able to see data in Quotas. Can anyone suggest if I am missing anything here?

^ Edited by @Ryan.Paredez to blur out the controller URL. Please do not share Controller URLs in community posts, for security and privacy reasons.
Hi, I'm able to receive and extract firewall traffic data using the Log Exporter function, but system messages (e.g. kernel warnings) still arrive via "normal" syslog, and there is no field extraction for those logs. Did I miss something? Best, Sebastian
Hi all, I need help with two regex problems.

1. Filtering the Class_Type value from _raw:

"Ticket_ID": "8158", Please see Work Detail for all Alerts associated with this Incident ID\n-----------------------------------------------------------------\nTHPSL- : Node is down.\nClass: NodeDown Trap\nHost: THPSL-\nAlertID: 46141249\nSource Tool: OpenNMS\n-------------------------\nThe Moogsoft situation id = 999790\n\nSituationID from Moogsoft = https://moogui.na.xom.com/#/situation/999790

I use the regex below, which worked on https://regex101.com/, but I am not getting the same output in a Splunk query:

| rex field=_raw "Class:\s(?<Class_Type>[^\\]+)"

2. _raw containing two Class_Type values:

""Ticket_ID"": ""1395"", Please see Work Detail for all Alerts associated with this Incident ID\n-----------------------------------------------------------------\nPGPLNGH1- : Node is down.\nClass: NodeDown Trap\nHost: PGPLNGH1-\nAlertID: 45744967\nSource Tool: OpenNMS\n-------------------------\nPlease see Work Detail for all Alerts associated with this Incident ID\n-----------------------------------------------------------------\nPGPLNGH4-: Operational status Down on interface ifname:Gi0/28 ifindex:10128 ifdescr:GigabitEthernet0/28 ifalias:iLi to PGPLNGH1-:Gi0/28\nClass: Custom Trap\nHost: PGPLNGH4-\nAlertID: 45748120\nSource Tool: OpenNMS\n-------------------------\nThe Moogsoft situation id = 973750\n\nSituationID from Moogsoft = https://moogui.na.xom.com/#/situation/973750"

How can I get both Class_Type values in the Splunk output? Sample output:

Class_Type
Custom Trap
NodeDown Trap
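As a sanity check of the pattern itself outside Splunk (sample abbreviated, with real newlines standing in for the literal \n sequences shown above):

```python
import re

# Abbreviated sample with two "Class:" lines, as in the second example.
raw = (
    "PGPLNGH1- : Node is down.\n"
    "Class: NodeDown Trap\n"
    "Host: PGPLNGH1-\n"
    "-------------------------\n"
    "Class: Custom Trap\n"
    "Host: PGPLNGH4-\n"
)

# Capture everything after "Class:" up to the end of the line; with real
# newlines in the event, "." (which does not match \n) stops in the right place.
class_types = re.findall(r"Class:\s*(.+)", raw)
print(class_types)  # ['NodeDown Trap', 'Custom Trap']
```

If this matches, the gap between regex101 and SPL is likely backslash escaping: a character class like [^\\] needs extra escaping inside an SPL double-quoted string, and excluding the line break instead tends to be more robust. Also note that rex only returns multiple matches when max_match=0 (or >1) is set.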
Hi, I am submitting searches to Splunk via the API. Sometimes the jobs remain in the queued state forever. When I query a queued job, it says "this search could not be dispatched because of role-based concurrency", which would be fine if I could see the other jobs for that user. Using the /search/jobs API I can see other jobs and filter the results to those created by my user. When that search shows other jobs, I delete them, so the API then returns no jobs for my user, which I would expect to mean I can submit new jobs. But I still see jobs getting stuck in the queued state. It feels like my search for existing jobs is not returning the full list for some reason, but I don't know why. Any help would be very welcome. This is on Splunk Enterprise 8.0.4. Thanks.
Hi! I'm trying to set up a dashboard where users can see how much raw data they used over time, with the ability to select multiple indexes. (Note: I do have most of my indexes sending this data daily to a summary index; I'm still cleaning up indexes, so this is a more real-time option.) I'm trying to figure out what I'm doing wrong in this method; I get no results when I feel I should. I've looked and looked and can't find a solution.

| gentimes start=-1
| eval multi_index="activate_web main"
| makemv multi_index delim=" "
| mvexpand multi_index
| search index=multi_index
| eval raw_len=len(_raw)
| stats sum(raw_len) AS event_size by index
| eval "Size in GB"=event_size/1024/1024/1024
| sort - event_size
| table index "Size in GB"

I'm trying to get multi_index to behave like (index=main OR index=activate_web). I did find this, which got me closer, but I'm not sure what I'm missing: https://community.splunk.com/t5/Getting-Data-In/Form-with-a-multi-line-text-box-that-will-OR-every-line-it-is/td-p/21800 Thanks! Stephen
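A likely culprit: in `| search index=multi_index`, the right-hand side is treated as the literal string "multi_index", not as a reference to the field's values, so nothing matches. What the linked answer builds instead is an explicit OR clause. The string construction can be sketched outside Splunk like this (the index names are just examples):

```python
# Build an "(index=a OR index=b)" clause from a space-separated token value,
# the way a dashboard token or subsearch would. Index names are illustrative.
token_value = "activate_web main"
clause = "(" + " OR ".join(f"index={name}" for name in token_value.split()) + ")"
print(clause)  # (index=activate_web OR index=main)
```

In Simple XML, a multiselect input with prefix, valuePrefix, delimiter, and suffix attributes can produce the same clause directly from the user's selections, if memory serves, without any eval at all.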
Hello, I have a log as shown below:

FeatureDetails [tokenValidatorInfo=false, requestValidationRequired=false, requestPayloadValidationRequired=false, responsePayloadValidationRequired=false, aopUsed=false, tibcoCommunicatorUsed=false, secretsSecured=false]

I want to show the result like below:

tokenValidatorInfo=false
requestValidationRequired=false
requestPayloadValidationRequired=false
responsePayloadValidationRequired=false
aopUsed=false
tibcoCommunicatorUsed=false
secretsSecured=false
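In Splunk this is typically a job for rex with max_match=0 or the extract (kv) command; the splitting itself can be sanity-checked outside Splunk as follows (sample copied from above):

```python
import re

log = ("FeatureDetails [tokenValidatorInfo=false, requestValidationRequired=false, "
       "requestPayloadValidationRequired=false, responsePayloadValidationRequired=false, "
       "aopUsed=false, tibcoCommunicatorUsed=false, secretsSecured=false]")

# Pull every key=value pair out of the bracketed list, one per line.
pairs = re.findall(r"(\w+)=(\w+)", log)
for key, value in pairs:
    print(f"{key}={value}")
```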
Is it possible to upgrade to a higher Splunk Enterprise version on the existing servers (indexer & forwarder), or do we need to use new servers?
Hi, I am using the UF to collect data from the system. With the following stanza I receive all the information about bytes sent and received, which is too much for me: I am only interested in traffic generated by a specific process or processes. I currently have the stanza below live, but it still seems to send everything. The whitelist option may not apply here; I don't see it in the documentation for this input, so that would not surprise me.

[perfmon://Network Adapter WebEx]
counters = Bytes Received/sec;Bytes Sent/sec
instances = *
whitelist = *.webex.com
interval = 60
mode = single
object = Network Interface
index = xxxyyyzzz
useEnglishOnly = true
sourcetype = xxxyyyzzz:Network Adapter
disabled = 0

What would be the best way, if it is even possible, to capture only the network traffic for a specific process or processes? Besides traffic I am also interested in other metrics, such as errors and dropped packets. Maybe I am going about this the wrong way. Any help would be appreciated.
Hello all, I'm having trouble extracting fields from a sample in Splunk. I went to "Extract Fields" and got the first one, but I don't know how to continue. Here is the sample:

[{"Type":"Attention","ABUSE":18,"GSD 24x7":1,"CLOUD":0,"DC":0,"ECL":0,"ITMS":0,"NET":0,"RFO":17,"Total":36},{"Type":"Active","ABUSE":0,"GSD 24x7":22,"CLOUD":38,"DC":5,"ECL":1,"ITMS":0,"NET":12,"RFO":2,"Total":80},{"Type":"Total","ABUSE":18,"GSD 24x7":23,"CLOUD":38,"DC":5,"ECL":1,"ITMS":0,"NET":12,"RFO":19,"Total":116},{"Type":"P1","ABUSE":0,"GSD 24x7":0,"CLOUD":0,"DC":0,"ECL":0,"ITMS":0,"NET":0,"RFO":6,"Total":6},{"Type":"P2","ABUSE":0,"GSD 24x7":1,"CLOUD":0,"DC":0,"ECL":0,"ITMS":0,"NET":0,"RFO":10,"Total":11},{"Type":"P3\/4","ABUSE":18,"GSD 24x7":0,"CLOUD":0,"DC":0,"ECL":0,"ITMS":0,"NET":0,"RFO":1,"Total":19}]

From that, I would like to be able to calculate averages and sums from the numbers, using two fields:
- Team, with values: ABUSE, CLOUD, GSD 24x7, NET, RFO...
- Type, with values: Attention, Active...

With this in the search:

| rex max_match=0 "(?<Type>((\.*:\")\w+))"|

I got the Type, but I have no idea how to proceed. Any ideas? Thank you all in advance.
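Since the sample is valid JSON, one way to think about the reshaping is to flatten each object into (Team, Type, Count) rows, after which sums and averages are trivial; in Splunk, spath followed by untable gets to a similar shape. A sketch with an abbreviated copy of the sample (field names from the data, structure assumed):

```python
import json

# Abbreviated copy of the sample above.
sample = ('[{"Type":"Attention","ABUSE":18,"GSD 24x7":1,"Total":36},'
          '{"Type":"Active","ABUSE":0,"GSD 24x7":22,"Total":80}]')

records = []
for row in json.loads(sample):
    row_type = row.pop("Type")
    row.pop("Total", None)  # drop the precomputed total column
    for team, count in row.items():
        records.append({"Team": team, "Type": row_type, "Count": count})

print(records[0])  # {'Team': 'ABUSE', 'Type': 'Attention', 'Count': 18}
```

With the data in this long form, a sum per team is just grouping on Team, and an average per Type is grouping on Type.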
Hi, I am trying to search through some patch data to find the percentage of devices that have been patched, against the total number of machines. The search I have is below:

index="automox" sourcetype="automox:devices"
| dedup name
| eval patch_pend=if(pending_patches>0, 1, 0)
| eval patched=if(pending_patches=0, 1, 0)
| stats sum(patch_pend), count(name) AS total, sum(patched)

I first run an eval to flag machines with more than 0 pending patches, which gives me the field patch_pend, and another eval for patched machines. Then I use stats to sum patch_pend, count names (the total number of machines), and sum the patched machines. My thought was then to do another eval, similar to:

| eval perc=round(patch_pend*100/total,2)

But what this gives me is just one full pie chart with the total at 100%. So I have these three numbers:
- total number of machines
- machines with patches pending
- machines with no patches pending

What I want to show is the percentage of machines that have 0 pending patches. Can anyone point me in the right direction?
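One thing worth checking: after the stats command, the first aggregate is literally named sum(patch_pend), so a later eval that refers to patch_pend sees no value; giving each aggregate an AS alias avoids that. The arithmetic itself is straightforward, sketched here with made-up numbers:

```python
def patch_coverage(total, pending):
    """Percentage of machines with zero pending patches (illustrative numbers)."""
    patched = total - pending
    return round(patched * 100 / total, 2)

print(patch_coverage(200, 35))  # 82.5
```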
I have searched for this but have not found a suitable answer yet. I have a field as below:

time
"0"
"7"
"56"
"101"
"3045"
"7034"

These represent timestamps: 0 is 0 seconds, 56 is 56 seconds, 101 is 1m1s, 3045 is 30m45s. I would like to transform these to mm:ss format, so "0" would become "00:00" and "101" would become "01:01". How can I do this?
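Judging by the examples given ("101" is 1m1s, "3045" is 30m45s), these values look like minutes and seconds packed into one number rather than raw seconds, so splitting on the last two digits does the conversion. A sketch under that assumption:

```python
def to_mmss(value):
    """Treat the last two digits as seconds and everything before as minutes."""
    v = int(value)
    return f"{v // 100:02d}:{v % 100:02d}"

for t in ["0", "7", "56", "101", "3045", "7034"]:
    print(t, "->", to_mmss(t))
# 0 -> 00:00, 7 -> 00:07, 56 -> 00:56, 101 -> 01:01, 3045 -> 30:45, 7034 -> 70:34
```

In SPL the same split would presumably be an eval using integer division and modulo by 100, formatted with printf-style zero padding.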
I am using the query below to fetch the incident number from the subject line:

rex field=subject max_match=0 “(?<Incident>INC\d+)”

However, for the subject line below I am unable to fetch the incidents:

[SecMail:] INC000027755501|TAS00003760220 wrdna904xusa73|server is unreachable | INC000027790458| INC000027882562
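One thing worth checking: the quotes around the rex pattern above are curly ("smart") quotes, which often sneak in via rich-text editors and which Splunk will not parse as string delimiters. With plain ASCII quotes the pattern itself matches all three incident numbers, as a quick check outside Splunk shows:

```python
import re

subject = ("[SecMail:] INC000027755501|TAS00003760220 wrdna904xusa73|"
           "server is unreachable | INC000027790458| INC000027882562")

# Same pattern as the rex, minus the named group (findall returns the matches).
incidents = re.findall(r"INC\d+", subject)
print(incidents)  # ['INC000027755501', 'INC000027790458', 'INC000027882562']
```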
Hi, my issue is this: I have a search whose events contain a "NetworkIterface" field (eni-12345, eni-6789, ...), and a lookup listing the network interfaces whose IPs are public. I want to build a search that, by combining the search and the lookup, shows only the network interfaces whose IPs are public. The lookup looks like this:

ENI        Public IP
eni-1234   192.10.10.10
eni-5678   192.10.10.11
eni-9012   192.10.10.12

My search is basic:

index=abc sourcetype=xyz NetworkInterface=*

Thanks for your help!
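In SPL this is commonly done with a subsearch over the lookup, roughly `index=abc sourcetype=xyz [| inputlookup <your_lookup> | rename ENI as NetworkInterface | fields NetworkInterface]` (lookup and field names assumed from the table above). The filtering logic itself amounts to a set-membership test:

```python
# Keep only events whose interface appears in the "public IP" lookup.
public_enis = {"eni-1234", "eni-5678", "eni-9012"}   # from the lookup
events = [{"NetworkInterface": "eni-1234"},
          {"NetworkInterface": "eni-9999"},
          {"NetworkInterface": "eni-9012"}]

public_only = [e for e in events if e["NetworkInterface"] in public_enis]
print([e["NetworkInterface"] for e in public_only])  # ['eni-1234', 'eni-9012']
```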
Hello, is it possible to populate a dashboard drop-down with eval values? I have a query, given below, which returns a string:

index=test sourcetype="testabc"
| rename sre_job_id as JOB_ID
| stats earliest(_time) AS Earliest by JOB_ID
| eval FirstEvent=strftime(Earliest,"%b %d, %Y %H:%M:%S")
| eval JOB_ID_STR=tostring(JOB_ID)
| eval JOB-ID-WITH-TIME=JOB_ID + "-" + FirstEvent
| table JOB-ID-WITH-TIME
| dedup JOB-ID-WITH-TIME
| sort JOB-ID-WITH-TIME

When I run this search over "Last 7 days", I get 3 records back:

7220-Aug 13, 2020 11:22:00
7320-Aug 13, 2020 11:46:32
7800-Aug 14, 2020 04:50:06

But when I use the same query for a drop-down in my dashboard, I see no data. Below is the XML:

<input type="dropdown" token="jobIDII" searchWhenChanged="true">
  <label>JOB ID II</label>
  <fieldForLabel>sre_job_id</fieldForLabel>
  <fieldForValue>sre_job_id</fieldForValue>
  <search>
    <query>index=test sourcetype="testabc" | rename sre_job_id as JOB_ID | stats earliest(_time) AS Earliest by JOB_ID | eval FirstEvent=strftime(Earliest,"%b %d, %Y %H:%M:%S") | eval JOB_ID_STR=tostring(JOB_ID) | eval JOB-ID-WITH-TIME=JOB_ID + "-" + FirstEvent | table JOB-ID-WITH-TIME | dedup JOB-ID-WITH-TIME | sort JOB-ID-WITH-TIME</query>
    <earliest>$timeToken.earliest$</earliest>
    <latest>$timeToken.latest$</latest>
  </search>
</input>

Note: timeToken is the token for the time input that filters the data for this query in my dashboard. As the screenshot (not reproduced here) showed, the JOB ID II drop-down does not populate any data and I cannot select anything from it. Thanks for your time in advance.
When running the Universal Forwarder on a physical Windows server with multiple CPUs, it looks like the agent selects a random CPU from which to collect performance metrics: sometimes it collects data from CPU #1, sometimes from CPU #2. If I have two CPUs and a client process (also not NUMA-aware) that consumes all of the capacity of ONE of the CPUs, the Universal Forwarder might be "connected" to the other CPU and only collect CPU metrics from a potentially idle CPU. Do others observe this problem, or is it just me?
Goal: to get a table summing the amount of data transferred between specified time ranges, based on a transaction.

Sample data:

12:00 01/01/2020 Task 1020 Started
13:00 01/01/2020 Task 1020 Finished
14:00 02/01/2020 Task 3020 Started
15:00 02/01/2020 Task 3020 Finished

12:00 01/01/2020 Data Sent 50
12:01 01/01/2020 Data Sent 50
12:02 01/01/2020 Data Sent 50
14:10 02/01/2020 Data Sent 50
14:11 02/01/2020 Data Sent 50
14:12 02/01/2020 Data Sent 50

Desired outcome:

Task 1020 Start 12:00 01/01/2020 Finish 13:00 01/01/2020 Data Sent 150
Task 3020 Start 14:00 02/01/2020 Finish 15:00 02/01/2020 Data Sent 150

Approach: this requires two searches: one to find the task start and finish times, and a second to find and sum the data that was sent between those times. I am really struggling with the nested-search aspect. I can get a transaction search to produce the start and finish times quite easily, but I don't know how to feed those results into a second search to calculate the amount of data sent.

Best search so far:

index=main "Started" OR "Finished"
| rex "Task\s(?<task_id>[0-9]*)"
| transaction task_id startswith="Started" endswith="Finished"
| eval latest = _time + duration
| eval earliest = _time
| fields earliest latest
| format "multisearch " "[ search source=stream:netflow" "" "| stats sum(DataSent) as bytes_transferred ]" "" ""

This produces a set of subsearches, one per task time range. My problem is that I don't know how to pass the identifying task_id value into the subsearches. My initial thought was to use something like "eval task_id=task_id", but because of the limits of the format command I can't specify which fields appear where. I would appreciate any ideas on how to approach this problem.
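One SPL route worth exploring is the map command, which runs a secondary search once per transaction row and can substitute that row's fields (e.g. $task_id$, plus earliest/latest) into it, though it comes with its own concurrency and row limits. Whatever the mechanism, the underlying computation is an interval sum, sketched here against the sample data above:

```python
from datetime import datetime

def parse(ts):
    """Parse the 'HH:MM DD/MM/YYYY' timestamps used in the sample data."""
    return datetime.strptime(ts, "%H:%M %d/%m/%Y")

# Task windows (from the transaction) and data-sent events, as sampled above.
tasks = [("1020", parse("12:00 01/01/2020"), parse("13:00 01/01/2020")),
         ("3020", parse("14:00 02/01/2020"), parse("15:00 02/01/2020"))]
sent = [(parse("12:00 01/01/2020"), 50), (parse("12:01 01/01/2020"), 50),
        (parse("12:02 01/01/2020"), 50), (parse("14:10 02/01/2020"), 50),
        (parse("14:11 02/01/2020"), 50), (parse("14:12 02/01/2020"), 50)]

# Sum the bytes of every data event that falls inside each task's window.
totals = {task_id: sum(b for t, b in sent if start <= t <= end)
          for task_id, start, end in tasks}
print(totals)  # {'1020': 150, '3020': 150}
```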