All Topics



I have a situation where I need to dump a remote Security log with wevtutil and subsequently upload it into Splunk to cross-correlate it with XmlWinEventlog sourcetype logs. I hoped that the XML structure of the wevtutil output file would be the same as the structure of the events Splunk receives from Universal Forwarders. It looks like that is not the case. I tried to upload the XML file into Splunk, but for some reason Splunk converts it into a bunch of Unicode characters rather than recognizing it as an XML file. Selecting the XmlWinEventlog sourcetype did not help either. I wonder if anyone has managed to load an XML file created by the wevtutil utility into Splunk with proper field extraction. Thank you.
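For reference, one common cause of the "bunch of Unicode characters" symptom is that the exported file ends up UTF-16 encoded, which Splunk will not always detect. A typical export command looks like this (/r, /f and /c are documented wevtutil flags for remote host, output format and event count):

```
wevtutil qe Security /r:REMOTEHOST /f:xml /c:1000 > security.xml
```

A custom sourcetype with an explicit CHARSET is one way to test the encoding theory. This is a sketch, not a tested config; the stanza name is made up, CHARSET assumes the file really is UTF-16, and TIME_PREFIX assumes wevtutil's single-quoted SystemTime attribute, so check your actual file first:

```
# props.conf -- hypothetical sourcetype for wevtutil XML dumps
[wevtutil_xml]
CHARSET = UTF-16LE
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)(?=<Event[ >])
TIME_PREFIX = SystemTime='
KV_MODE = xml
```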
Hello Support team, the developer trial license expired recently, but when I tried to install the new license that was sent by email, I got the following error message: Bad Request — web_1604432005.144831.lic: failed to add because: cannot add lic w/ subgroupId=DevTest:aortega@accedian.com to stack w/ subgroupId=Production. Could you help me?
Hello, I have Splunk Enterprise v8.1 in a distributed cluster with 1 SH, 1 master, 2 indexers, and 2 heavy forwarders. I have the Cisco Security Suite installed on the HF, and the data visualization is displaying correctly there. I am looking for assistance to display the data on the SH. Cisco devices send logs to the HF. The HF is configured to route traffic to the indexers, and all is working fine. Search results show up on the SH, but I can't get the Cisco Security Suite app to display the data. Any help would be greatly appreciated. Thanks!
A couple of times the TA-ObserveIT add-on caused Splunk to shut itself down. How can that be? We see the following in splunkd.log:

10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" File "/opt/apps/splunk/etc/apps/TA-ObserveIT/bin/ta_observeit/solnlib/packages/splunklib/binding.py", line 1221, in request
10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" raise HTTPError(response)
10-02-2020 07:41:27.869 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" HTTPError: HTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"External handler failed with code '-1' and output: ''. See splunkd.log for stderr output."}]}
10-02-2020 07:41:27.903 -0400 ERROR ExecProcessor - message from "python /opt/apps/splunk/etc/apps/TA-ObserveIT/bin/observeit_api.py" ERRORHTTP 500 Internal Server Error -- {"messages":[{"type":"ERROR","text":"External handler failed with code '-1' and output: ''. See splunkd.log for stderr output."}]}
10-02-2020 07:41:27.946 -0400 INFO PipelineComponent - Performing early shutdown tasks
10-02-2020 07:41:27.950 -0400 INFO IndexProcessor - handleSignal : Disabling streaming searches.
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - request state change from=RUN to=SHUTDOWN_SIGNALED
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - handleSignal : Disabling streaming searches.
10-02-2020 07:41:27.951 -0400 INFO IndexProcessor - request state change from=RUN to=SHUTDOWN_SIGNALED
10-02-2020 07:41:27.951 -0400 INFO UiHttpListener - Shutting down webui
10-02-2020 07:41:27.961 -0400 INFO UiHttpListener - Shutting down webui completed
10-02-2020 07:41:28.689 -0400 INFO IndexProcessor - ingest_pipe=0: active realtime streams have hit 0 during shutdown
10-02-2020 07:41:28.849 -0400 INFO IndexProcessor - ingest_pipe=1: active realtime streams have hit 0 during shutdown
Hi everyone, does anyone know a way of running Splunk admin commands via a script? I need to run ./splunk reload deploy-server via a cron job.
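The splunk CLI can be called directly from a crontab; the documented -auth flag avoids the interactive login prompt. A sketch, where the install path, schedule, and log file are assumptions, and the credentials are placeholders (putting a real password in a crontab is generally discouraged; consider a wrapper script with restricted permissions):

```
# crontab entry: reload the deployment server every night at 02:00
0 2 * * * /opt/splunk/bin/splunk reload deploy-server -auth '<username>:<password>' >> /var/log/splunk_reload.log 2>&1
```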
I have an event ingesting into Splunk via HEC which is around 13k characters, with approx. 260 fields within the JSON of the event. Currently, we do not see all the fields being extracted with auto KV at search time, and I do not want to make these indexed fields because doing so would balloon the index size greatly. In some other non-JSON events that are rather large, we have increased the limits.conf [kv] maxchars value up to 100000 to allow key-value pairs to be extracted as users expect in larger events. I figured that this new scenario with JSON was similar, and I have so far increased, within the same stanza: limit = 0 (unlimited), maxcols = 1024, avg_extractor_time = 3000, max_extractor_time = 6000. After these updates I am still not seeing all fields extracted. I also tried using spath on the entire _raw, which did not work, so I raised the limits.conf [spath] stanza to extraction_cutoff = 100000. Similarly, it did not extract when run over the whole _raw. I could call a specific field with "spath path=<field_name>", but I do not want to do that for 50+ fields, especially if more are added or removed at a later date. I have considered whether the issue occurs on ingestion via HEC, but these are all search-time extractions, not indexed extractions.   Are there any other configurations around auto KV extraction that we should look into testing with an increased limit? For the best user experience I want this to all continue to happen automatically, without calling out many fields explicitly in an spath.
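For reference, the search-time settings mentioned above live in these limits.conf stanzas on the search head; the values here are illustrative, not recommendations:

```
# limits.conf -- illustrative values only
[kv]
maxchars = 100000            # max characters of an event inspected for KV extraction
limit = 0                    # 0 = no cap on the number of extracted fields
maxcols = 1024

[spath]
extraction_cutoff = 100000   # max bytes of input the spath extraction will process
```

It may also be worth confirming that the sourcetype has KV_MODE = json in props.conf on the search head, since automatic JSON extraction depends on it.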
Hi, I'm trying to replace the blank values in my query with 0s. If the extension has no record in the log, its count must appear as zero. I tried with fillnull but I was not successful. Another thing: I would like to always sort in descending order. How could I do that?

| inputlookup ramais.csv | fields - Site | join type=left Ramal [search index=raw_ramais | rex field=_raw "EXTENSION:(?<Ramal>\+?\d+)" | stats count by Ramal ]

Result:
Ramal     count
1111111
2222222   65
3333333
4444444
5555555
6666666   36
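fillnull needs to run after the join and name the field explicitly, and a sort gives the descending order. A sketch using the same lookup and field names from the question:

```
| inputlookup ramais.csv
| fields - Site
| join type=left Ramal
    [ search index=raw_ramais
      | rex field=_raw "EXTENSION:(?<Ramal>\+?\d+)"
      | stats count by Ramal ]
| fillnull value=0 count
| sort - count
```

The left join leaves count empty for extensions with no events; fillnull then fills those with 0 so the sort ranks them last.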
Having issues with splitting the complete search between a "base search" and the "remaining search" in other panels.

Complete search:

index=Temp_Index="http:hec_splunk" sourcetype="json:script_output" "Source Team"="UNIX_SA"
| where like('Region', "%APAC%")
| stats sum(TotalSpace) AS Total sum(UsedSpace) AS Used sum(AvailableSpace) AS Available
| eval Total=round(Total/1024,0)
| eval Used=round(Used/1024,0)
| eval Available=round(Available/1024,0)
| table Used,Available
| transpose
| eval Used=Used."(".Used."%)"

The split below is the only one that works, but it won't work for me.

Working XML code:

<row>
  <panel depends="$nevershowup$">
    <event>
      <title>BASE SEARCH PANEL</title>
      <search id="baseSearch">
        <query>index=Temp_Index="http:hec_splunk" sourcetype="json:script_output" "Source Team"="UNIX_SA" | where like('Region', "%APAC%") | stats sum(TotalSpace) AS Total sum(UsedSpace) AS Used sum(AvailableSpace) AS Available | eval Total=round(Total/1024,0) | eval Used=round(Used/1024,0) | eval Available=round(Available/1024,0) | table Used,Available | transpose | </query>
        <earliest>$timepicker.earliest$</earliest>
        <latest>$timepicker.latest$</latest>
      </search>
      <option name="list.drilldown">none</option>
    </event>
  </panel>
</row>
<row>
  <panel>
    <title>NAM Region</title>
    <chart>
      <search base="baseSearch">
        <query> eval Used=Used."(".Used."%)"</query>
      </search>
      <option name="charting.chart">pie</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.showPercent">true</option>
      <option name="charting.chart.stackMode">stacked100</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">top</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>

The way I want it to work, but which is not working, is below. The reason is that I have many panels, and the common string in all panels is index=Temp_Index="http:hec_splunk" sourcetype="json:script_output" "Source Team"="UNIX_SA", so I want to use that in the base search. I tried using "| fields *" as suggested in other Splunk Community answers, but it is not working.

<row>
  <panel depends="$nevershowup$">
    <event>
      <title>BASE SEARCH PANEL</title>
      <search id="baseSearch">
        <query>index=Temp_Index="http:hec_splunk" sourcetype="json:script_output" "Source Team"="UNIX_SA"  </query>
        <earliest>$timepicker.earliest$</earliest>
        <latest>$timepicker.latest$</latest>
      </search>
      <option name="list.drilldown">none</option>
    </event>
  </panel>
</row>
<row>
  <panel>
    <title>NAM Region</title>
    <chart>
      <search base="baseSearch">
        <query> | where like('Region', "%APAC%") | stats sum(TotalSpace) AS Total sum(UsedSpace) AS Used sum(AvailableSpace) AS Available | eval Total=round(Total/1024,0) | eval Used=round(Used/1024,0) | eval Available=round(Available/1024,0) | table Used,Available | transpose | eval Used=Used."(".Used."%)"</query>
      </search>
      <option name="charting.chart">pie</option>
      <option name="charting.chart.showDataLabels">all</option>
      <option name="charting.chart.showPercent">true</option>
      <option name="charting.chart.stackMode">stacked100</option>
      <option name="charting.drilldown">none</option>
      <option name="charting.legend.placement">top</option>
      <option name="refresh.display">progressbar</option>
    </chart>
  </panel>
</row>
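One known constraint here: a base search that returns raw events (no transforming command) only passes a limited set of fields to post-process searches, so the usual workaround is to end the base search with a `fields` command that explicitly lists every field the panels need. A sketch of just the base-search element; the field list is my guess at what the panels above use:

```
<search id="baseSearch">
  <query>index=Temp_Index="http:hec_splunk" sourcetype="json:script_output" "Source Team"="UNIX_SA"
| fields Region TotalSpace UsedSpace AvailableSpace</query>
  <earliest>$timepicker.earliest$</earliest>
  <latest>$timepicker.latest$</latest>
</search>
```

Each panel's post-process query then starts with its own `| where like('Region', ...)` and the stats/eval chain.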
Hi, I would like to build my own Splunk user behavior app. Can you guide me through the steps of building it, by providing references and other sites that can help me build it? Thank you,
Hi, I need help converting my Splunk deployment from IPv4 to IPv6. The questions I have are:
1. What changes need to be made to each server to use IPv6?
2. What changes need to be made to the Splunk deployment server?
3. What changes need to be made to each Splunk deployment client?
4. Are there system changes that need to be made by an admin to let Splunk work over IPv6?
I would greatly appreciate it if each of these questions could be answered, to help me convert from IPv4 to IPv6 step by step.   Thank you,
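On the Splunk side, the main switches live in server.conf (and web.conf for Splunk Web). A sketch of the documented settings, not a complete migration plan; OS-level IPv6 addressing, DNS, and firewall rules are separate admin work:

```
# server.conf
[general]
listenOnIPv6 = yes            # yes = listen on IPv4 and IPv6; only = IPv6 only
connectUsingIpVersion = auto  # version preference for outbound connections
```

The same listenOnIPv6 setting also exists in web.conf under [settings] for the Splunk Web port. Deployment clients would additionally need their deploymentclient.conf targetUri to resolve to (or literally contain) an IPv6 address.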
Apologies if these are very basic questions, but I am new to the API and the SDK. I am running the script below following the guidelines provided in the documentation, but I am getting the following error. Can anyone point me in the right direction? https://docs.splunk.com/Documentation/Splunk/8.1.0/Search/ExportdatausingSDKs

ConnectionRefusedError: [WinError 10061] No connection could be made because the target machine actively refused it

import splunklib.client as client
import splunklib.results as results

HOST = "<host>"          # placeholder -- Splunk server hostname
PORT = 8089              # splunkd management port
USERNAME = "<username>"  # placeholder
PASSWORD = "<password>"  # placeholder

service = client.connect(
    host=HOST, port=PORT, username=USERNAME, password=PASSWORD)

rr = results.ResultsReader(
    service.jobs.export("search index=_internal earliest=-1h | head 5"))
for result in rr:
    if isinstance(result, results.Message):
        # Diagnostic messages might be returned in the results
        data = (result.type, result.message)
        print("%s:%s" % data)
    elif isinstance(result, dict):
        # Normal events are returned as dicts
        print(result)
assert rr.is_preview == False
Hello, I have read the documentation on routing and filtering events (https://docs.splunk.com/Documentation/Splunk/8.1.0/Forwarding/Routeandfilterdatad), but can you tell me if I'm going in the right direction?

Here is my scenario: I have 1 Heavy Forwarder that receives events from multiple Universal Forwarders, and the HF then forwards the events to an Indexer. UFs -> HF -> IDX. There is a new data source that we want to collect from one of the Universal Forwarders, but we don't want this data to be sent to the Indexer yet. We would like to index it on the Heavy Forwarder first, because we need to test some data anonymization on the events before sending them to the Indexer. All other data collected by the Universal Forwarders should still be sent to the Indexer as usual. Our HF is configured with a tcpout defaultGroup pointing to the Indexer.

What I understand is that I have to configure this on my Universal Forwarder for the input I want to index locally:

[monitor:///mydata/source1.log]
_INDEX_AND_FORWARD_ROUTING=local
...

If I understand correctly, these events will be indexed locally on the HF, but they will also be forwarded to the Indexer because of the defaultGroup. Would it be a good solution to use props/transforms on the HF to change the _TCP_ROUTING of these events (to give them a non-existent output group like "NoForward", for instance)? Or is there another/simpler solution? Thank you!
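The props/transforms idea described above can be sketched like this on the heavy forwarder. The stanza and group names here are made up, and pointing _TCP_ROUTING at a group with no reachable servers is a workaround rather than a documented pattern, so how (or whether) to define the "NoForward" group is exactly the open question:

```
# props.conf (on the HF)
[source::/mydata/source1.log]
TRANSFORMS-noforward = route_to_noforward

# transforms.conf (on the HF)
[route_to_noforward]
REGEX = .
DEST_KEY = _TCP_ROUTING
FORMAT = NoForward
```

All other events keep the defaultGroup routing, so only this source is held back from the Indexer while it is indexed locally.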
Looking to write a search that filters mount drives. For example, the values for the field "mount" are "C:" "D:" "F" "harddiskvol1" "harddiskvol2" .... etc. How can I write a search that returns events ONLY where the mount is a letter, i.e. "C:" "D:", etc. and excludes "harddiskvol1" or anything that does not follow "<letter>:"?   Thanks in advance.
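The regex command can do this filtering. A sketch, assuming the mount values are exactly a letter plus a colon with nothing after them:

```
index=... sourcetype=...
| regex mount="^[A-Za-z]:$"
```

Drop the trailing `$` (or use `^[A-Za-z]:` alone) if values like "C:\" should also match.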
Hi all, I have a cluster with 2 indexers, plus a cluster master on a different server. For reasons I don't know (we inherited the environment), the two indexers store hot/warm buckets locally and cold buckets on a NAS (the same NAS for both indexers). We have to perform an activity on that NAS, so it will be unavailable for a period of 3-4 hours. My question is: if I put the indexers in maintenance mode, do I stop cold buckets from being generated? Or do you have any suggestion for performing this activity while avoiding any data loss? Sorry, but in the Splunk documentation I can't find reliable information about doing this on both indexers at the same time. Thanks in advance.
I need an automated health check to run on a website I support. Can Splunk check that login (with static login details) works successfully, and also check each link on the website to ensure everything is up and running normally? I would like this to run as frequently as possible (depending on server performance), ideally every minute, or at least every 10 minutes. Is this possible with standard Splunk Enterprise? Do I need an add-on? Or is it simply not possible in Splunk? If it is possible, what basic steps should I take to set it up?   Thanks
Hi Team, we are using a Splunk Enterprise - Splunk Partner NFR License. We were delayed in adding the license; it was added successfully, but we are getting the following licensing alerts.

Current:
1 pool violation reported by 1 indexer
1 pool warning reported by 1 indexer

Permanent:
36 pool quota overage warnings reported by 1 indexer
This pool has exceeded its configured poolsize=0 bytes. A warning has been recorded for all members.

Also, when performing any search, we get this error:
Error in 'litsearch' command: Your splunk license expired or you have exceeded your license limit too many times. Renew your splunk license by visiting www.splunk.com/store or calling 866.GET.SPLUNK.

1. Splunk Enterprise - Splunk Partner NFR License
creation_time 2020-09-07 05:30:00+05:30
expiration_time 2021-09-07 05:30:00+05:30

Please help me resolve the above license alerts.
I'm running Splunk Enterprise version 8.0.2.1 in a distributed environment with 3 search heads and 8 indexers. I've created a data model with child datasets for Pivot users, and I noticed two bugs. First, while I can hide fields from Pivot users at the parent dataset level, I am not able to hide child fields. I can mark them as hidden if they are a child of a parent dataset, but when I actually go into the Pivot view, the fields still show up as optional Split by fields. Second, if I create a child beneath an existing child dataset (i.e., a 3rd-level child) and mark its fields as hidden, their status in Splunk Web returns to Shown as soon as I click away and come back to the same settings in the Data Model editor. Any thoughts, ideas, solutions?
I want the difference between 155 and 132. How can I do this with SPL?
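In SPL a subtraction is an eval. With literals:

```
| eval difference = 155 - 132
```

If the two numbers arrive in fields, the same pattern applies; the field names below are hypothetical stand-ins for wherever your values come from:

```
... | eval difference = first_value - second_value
```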
How can I trigger my alert with different conditions on the result? I don't want my alert to be triggered if I have more than 9 results, so I put this: search count<9. The problem is that even when the search has no results (0), the alert keeps being triggered. So I tried this: search count>0 and count<9, and it doesn't work. Can someone please help me?
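One detail worth checking: in SPL the boolean operators must be uppercase, so a lowercase "and" is treated as a literal search term rather than a condition. The custom trigger condition would then be:

```
search count > 0 AND count < 9
```

This assumes the alert's search produces a single count field (e.g. via `| stats count`); with lowercase "and" the condition looks for the word "and" in the results and never matches as intended.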
Hello, I use the search below, which works fine:

| inputlookup lookup_patch
| lookup fo_all HOSTNAME as host output SITE DEPARTMENT RESPONSIBLE
| stats dc(host) as host by SITE DEPARTMENT RESPONSIBLE
| stats sum(host) as NbNonCompliantPatchesIndHost by SITE DEPARTMENT RESPONSIBLE
| append
    [| inputlookup lookup_patch
     | lookup fo_all HOSTNAME as host output SITE DEPARTMENT RESPONSIBLE
     | stats dc(host) as NbIndHost by SITE DEPARTMENT RESPONSIBLE ]
| search SITE=*
| search DEPARTMENT=*
| search RESPONSIBLE_USER=*
| stats values(*) as * by SITE DEPARTMENT RESPONSIBLE
| eval Perc=round((NbNonCompliantPatchesIndHost/NbIndHost)*100,2)
| table Perc, NbIndHost, NbNonCompliantPatchesIndHost

As you can see, I use token filters for the SITE, DEPARTMENT and RESPONSIBLE fields. The problem I have is when I keep * in these token filters. For example, if I choose * for SITE but choose a name for RESPONSIBLE, I would like to be able to calculate and accumulate the Perc value, and as a consequence the NbIndHost and NbNonCompliantPatchesIndHost values, for this RESPONSIBLE. Is it possible to do this? Thanks