All Topics

Hi, is it possible to know which user created/wrote a line in my lookup when it is populated by a search with the outputlookup command, like:

index=A | table id name | outputlookup my_lookup.csv

Thanks for your help!
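One possible approach, offered as a hedged sketch: Splunk's _audit index records the searches users run, so (assuming audit logging is enabled and you have access to _audit) you can look for searches that wrote to the lookup. The field names below (user, search) are the usual audit.log fields, but verify them in your environment:

```
index=_audit action=search info=granted search="*outputlookup*my_lookup.csv*"
| table _time user search
```

Note this tells you who ran a search that wrote the file, not which individual line each user wrote; per-row authorship would have to be stamped into the lookup at write time.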
I have inherited a Splunk environment and one thing I've noticed is that all of our heavy forwarders are using useACK. Is this best practice? We are seeing duplicate events because of this, and I wanted to see if I should work on tailoring the config for indexer acknowledgement, or if I should just disable it and move on.
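For context, useACK is the indexer-acknowledgement setting in outputs.conf on the forwarder; duplicates typically appear when the forwarder resends a block whose acknowledgement never arrived, since the mechanism is at-least-once rather than exactly-once. A minimal sketch of where the setting lives (group name and hosts are placeholders):

```
# outputs.conf on the heavy forwarder
[tcpout:my_indexers]
server = idx1.example.com:9997, idx2.example.com:9997
# indexer acknowledgement gives at-least-once delivery,
# so ACK timeouts can surface as duplicate events
useACK = true
```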
Hello, I have the following table:

item   attribute  first_seen  last_seen
item1  attr1      01.01.1970  01.01.2000
item2  attr2      01.01.1970  01.01.2020
item3  attr3      01.01.2021  01.01.2021

and I would like to count the last two fields based on their value (as in the table below).

date        count(first_seen)  count(last_seen)
01.01.1970  2                  0
01.01.2000  0                  1
01.01.2020  0                  1
01.01.2021  1                  1

I have tried to achieve this by using two searches and appending them, but I'm stuck at grouping them by date, which is a value of the two fields and may or may not be shared between them.

.......... | stats count by last_seen | append [ .......... | stats count by first_seen ]

gives me the table below, which is a step forward, but far from what I need:

last_seen   count  first_seen
01.01.1970  0
01.01.2000  1
01.01.2020  1
01.01.2021  1
            2      01.01.1970
            0      01.01.2000
            0      01.01.2020
            1      01.01.2021

Can anyone give me a hint? Cheers
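One common pattern for this kind of problem, as a hedged, untested sketch built on the append search above: rename both date fields to a shared name before appending, then aggregate again by that shared name so rows from both legs merge:

```
.......... | stats count as first_count by first_seen
| rename first_seen as date
| append
    [ .......... | stats count as last_count by last_seen
      | rename last_seen as date ]
| stats sum(first_count) as count_first_seen, sum(last_count) as count_last_seen by date
| fillnull value=0 count_first_seen count_last_seen
```

The final fillnull replaces the empty cells (dates that appear in only one of the two legs) with 0, matching the desired output table.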
While using the table command for Bro conn data, I am getting duplicate data; however, if I use mvdedup, I get all the desired results except for id.orig_h and id.resp_h. If I use mvdedup on these two fields, I get blank values.

index=bro_conn
| eval id.orig_h=mvdedup(id.orig_h)
| eval id.resp_h=mvdedup(id.resp_h)
| eval dest_port=mvdedup(dest_port)
| eval resp_ip_bytes=mvdedup(resp_ip_bytes)
| eval orig_ip_bytes=mvdedup(orig_ip_bytes)
| eval conn_state=mvdedup(conn_state)
| eval proto=mvdedup(proto)
| table _time id.orig_h id.resp_h dest_port orig_ip_bytes resp_ip_bytes conn_state

Here are the results if I don't use mvdedup:

Is there a better way to dedup the results, and if this is the right way, how can I make it work for the src and dest IPs? Thank you!
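A likely explanation, offered tentatively: inside eval, an unquoted dot is the string-concatenation operator, so id.orig_h is parsed as the field id concatenated with the field orig_h (both empty), which would produce exactly the blank values described. Field names containing dots need single quotes inside eval, on both sides of the assignment:

```
index=bro_conn
| eval 'id.orig_h'=mvdedup('id.orig_h'), 'id.resp_h'=mvdedup('id.resp_h')
| table _time id.orig_h id.resp_h
```

The table command itself does not need the quoting; only eval (and other expression contexts) treats the dot specially.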
Hi, I have created my own extension for machine load. I know such extensions already exist, but this is a learning exercise for me. The problem I have is that the metric values are around 0.2, and I get this error:

WARN MetricOutputHandler - Caught NumberFormatException due to invalid metric value: 0.30 It must be a valid long. Full monitor line: name=Custom Metrics|LOAD_Monitor|LOAD_AVG_1min, value=0.30

Is there a way I can return decimals? Thanks, Eddie
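As the error message says, AppDynamics machine-agent metric values must be whole numbers (longs), so fractional values cannot be reported directly. A common workaround, sketched here with a hypothetical metric name, is to scale the value and make the scale part of the name:

```
# instead of emitting value=0.30, scale the load by 100
# and rename the metric so the unit is explicit:
name=Custom Metrics|LOAD_Monitor|LOAD_AVG_1min_x100, value=30
```

Dashboards can then divide by 100 (or just read the metric as hundredths) to recover the decimal value.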
I've got an issue with a few field aliases on extracted fields from a JSON log. This is done on a search head, through the web GUI. props.conf reads:

[surcetype_name]
FIELDALIAS-<class> = field_name1 ASNEW alias1 field_name2 ASNEW alias2 field_name3 ASNEW alias3 field_name4 ASNEW alias4
EXTRACT-<class> = regex string to get field_name1
EXTRACT-<class> = regex string to get field_name2
EXTRACT-<class> = regex string to get field_name3
EXTRACT-<class> = regex string to get field_name4

According to https://docs.splunk.com/Documentation/Splunk/7.3.3/Knowledge/Searchtimeoperationssequence, since this is done on a search head, EXTRACT-<class> is performed before FIELDALIAS-<class>. But only one of the aliases returns data. When I put the field_name and alias side by side in a table, I can see that the extraction is done right and there is data, but the alias is blank. Anyone got a tip as to what I might have got wrong here?
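One thing worth checking, offered as a guess rather than a confirmed diagnosis: settings in a props.conf stanza are keyed by name, so if the four EXTRACT lines literally share the same <class> suffix, only the last one survives, and only one extracted field (and hence one alias) would resolve. Each extraction needs a unique class name, for example (class names here are made up):

```
[sourcetype_name]
EXTRACT-f1 = regex string to get field_name1
EXTRACT-f2 = regex string to get field_name2
EXTRACT-f3 = regex string to get field_name3
EXTRACT-f4 = regex string to get field_name4
FIELDALIAS-aliases = field_name1 ASNEW alias1 field_name2 ASNEW alias2 field_name3 ASNEW alias3 field_name4 ASNEW alias4
```

Also note, if memory serves, the ASNEW keyword only exists in Splunk 8.0 and later; on the 7.3 release your docs link refers to, the keyword is AS.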
Hello @rnowitzki @renjith_nair, could you help me with the following question please:

Every day at 6 p.m. I index Splunk events from an ITRS database. Each event consists of a critical alert and an OK alert generated by ITRS. The events concern a pair of servers (active and passive).

I have to define a search with a condition like this: if server1 is KO AND server2 is KO, AND server1 goes OK only after server2 went KO, then I calculate the time between the last server going KO and the first server going OK. This corresponds to the downtime of my service, but only if both servers are KO at the same time. I.e., if server1 is KO at 3 p.m. AND server2 is KO at 3:30 p.m. BUT server1 is OK again at 3:20 p.m., then do nothing.

I have this:

index=index (severity=2 OR severity=0 OR severity="-1" OR severity=1) server=server1 OR server=server2
| eval ID=Service+"_"+Env+"_"+Apps+"_"+Function+"_"+managed_entity+"_"+varname
| addinfo
| eval periode=info_max_time-info_min_time
| transaction ID startswith=(severity=2) maxevents=2

I don't know how to create the condition. Thank you for your help.
I've deployed a honeypot on my Raspberry Pi with Ubuntu using Cowrie, and I now have some log files I would like to send over to Splunk Enterprise. I'm having problems successfully receiving and accessing the logs in the Splunk Enterprise web interface. Currently my inputs.conf and outputs.conf files look like this, and I'm unsure whether I'm missing any required stanzas or fields.

inputs.conf

[monitor:///home/ubuntu/cowrie/var/lib/cowrie/tty]
disabled = 0

outputs.conf

[tcpout]
defaultGroup=my_indexers

# new stanza
[tcpout:my_indexers]
server='my ip':9997

Am I missing any host fields, and how do I go about looking for the logs on the Splunk Enterprise side? I am accessing the Splunk Enterprise web interface by entering 'localhost:8000...'. I'm under the impression I might have missed a step somehow: I downloaded and extracted the Universal Forwarder, made a splunk user account, started Splunk, and edited the conf files... is that all that is required, or do I need to edit settings on the Enterprise side? All I'm seeing when I enter 'index=main' in the search field is one log file when multiple exist in the directory. Is my syntax in the .conf files incorrect?
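A hedged sketch of what the two files usually look like. Two things stand out: the server value should be a bare host:port with no quotes (the IP below is a placeholder), and, if I remember Cowrie's layout correctly, var/lib/cowrie/tty holds binary session recordings, while the human-readable JSON logs live under var/log/cowrie, which is usually the better thing to index (path and sourcetype name below are assumptions based on a default install):

```
# inputs.conf on the Universal Forwarder
[monitor:///home/ubuntu/cowrie/var/log/cowrie/cowrie.json]
disabled = 0
index = main
sourcetype = cowrie_json

# outputs.conf on the Universal Forwarder
[tcpout]
defaultGroup = my_indexers

[tcpout:my_indexers]
server = 192.0.2.10:9997
```

On the Enterprise side you also need a receiving port configured (Settings > Forwarding and receiving > Configure receiving, typically 9997) before the forwarder can connect.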
I want to use a REST Endpoint which gets accessed by a HTML Button and returns an file from the server, so that a "save file" dialog appears to the clients browser.   Is this possible and how?
Hi all, I have 3 search heads as part of a search head cluster and 5 indexers in the indexer cluster, and my search heads are also part of this indexer cluster. I'm upgrading my Splunk infrastructure from 6.6.3 to 7.2. Should all the search heads and indexers be upgraded at the same time, or can I upgrade the search head cluster first and the indexer cluster later? The problem is that the search heads are also part of the indexer cluster: if I upgrade the search heads, will I run into problems with different versions of Splunk in the indexer cluster?

Search head cluster components: sh1 sh2 sh3
Indexer cluster: idx1 idx2 idx3 idx4 idx5 sh1 sh2 sh3

@isoutamo
Hi Team, how can we create a refresh button for a dashboard? I have created a submit button like below, but it doesn't seem to be working properly.

<fieldset submitButton="true" autoRun="true">
  <input type="time" token="field1" searchWhenChanged="true">
    <label>Date/Time</label>
    <default>
      <earliest>-24h@h</earliest>
      <latest>now</latest>
    </default>
  </input>
  <input type="text" token="process_tok1">
    <label>Processor Id</label>
    <default>*</default>
  </input>
  <input type="text" token="ckey" searchWhenChanged="true">
    <label>Parent Chain</label>
    <default></default>
    <prefix>parent_chain="*</prefix>
    <suffix>*"</suffix>
    <initialValue></initialValue>
  </input>
  <input type="text" token="usr">
    <label>User</label>
    <default>*</default>
  </input>
</fieldset>

Can anyone guide me on this?
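Simple XML has no built-in refresh button, but panel searches can re-run on a timer, which is often what a refresh button stands in for. A tentative sketch, with a placeholder query, using the time tokens from a fieldset like the one above:

```
<search>
  <query>index=_internal | stats count by sourcetype</query>
  <earliest>$field1.earliest$</earliest>
  <latest>$field1.latest$</latest>
  <!-- re-run this search 5 minutes after it completes -->
  <refresh>5m</refresh>
  <refreshType>delay</refreshType>
</search>
```

Alternatively, with submitButton="true" the Submit button itself re-dispatches the searches, so pressing it effectively acts as a manual refresh once the tokens are wired to the panels.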
Is there a way to take an existing index and create from it a new index with an aggregating search? Meaning, taking an existing index and using a search to create a new, aggregated index.
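This is usually done with a summary index: a scheduled search aggregates events from the source index and writes the results into a second index with the collect command. A minimal sketch (index names and the aggregation are placeholders; the target index must already exist):

```
index=source_index sourcetype=web
| stats count as hits by host
| collect index=summary_web
```

Run on a schedule (e.g. every hour over the previous hour), this builds up a compact, pre-aggregated index that later searches can query far more cheaply than the raw data.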
Hi Splunkers,

We have decided to use S3 as a centralized collection point for logs from various sources. I have the following questions on the best way to get data into Splunk from S3. Daily ingestion of logs will be around 5 GB, and multiple inputs will need to be created to read the individual S3 buckets.

Questions:
1. I see there are two add-ons available: Splunk Add-on for AWS and Splunk Add-on for Amazon Kinesis Firehose. Do both add-ons support reading data from S3? If so, which is the better method?
2. The Splunk Add-on for AWS has 3 input types: Generic S3, Incremental S3, and SQS-based S3. Which is the best method to ingest data from multiple directories at around 5 GB per day?
3. What is the recommended hardware for a heavy forwarder?

Thanks in advance.
Could anyone let me know how to fix the below error on Splunk search heads?

ERROR TsidxStats - sid:summarize_XXXXX Failed to contact the server endpoint https://127.0.0.1:8089 from touchSummary()
The Splunk app AbuseIPdb_check (https://splunkbase.splunk.com/app/4903) is not working as expected after copying the config.json file to the app's local directory and adding my AbuseIPDB API key. I have tried the syntax below:

| makeresults | eval ip="94.201.237.206" | abuseip ipfield=ip
| makeresults | eval ip="94.201.237.206" | abuseip(ip)

The error on Splunk web is:

Error in 'script': Get info probe failed for external search command 'abuseip'.

I did not find anything relevant as a pointer when I checked the Splunk _internal logs for this. Under all configurations, "abuseip" is listed as config type "command", with enabled status and global sharing permissions.

Has it worked for anyone? Any direction/solution pointer would be appreciated.
Hi Splunk folks, I am getting the above errors in my _internal logs. Nothing from this post applies to me: https://community.splunk.com/t5/Getting-Data-In/Getting-Error-from-TailReader/m-p/356760

I have checked that the file exists on the host, but the UF is unable to read it and send it to the indexers. I have checked permissions; the header is the same for all the files; there are plenty of resources on the host; and this is the only app running on the host where the UF is running. We have a distributed deployment.

Here comes the weird part: we miss some data (files) for a few days, and then it works fine for a few days. When we find the files which are not being ingested to the indexers from the UF, we manually touch the file on the backend (which changes the timestamp), and then we get success. But there are too many to touch, and it is very time consuming.

props.conf

[salesforce_csv_input]
TZ = GMT
SHOULD_LINEMERGE = false
TRUNCATE = 60000
pulldown_type = true
INDEXED_EXTRACTIONS = csv
CHECK_FOR_HEADER = true
KV_MODE = none
category = Structured
NO_BINARY_CHECK = true
FIELDALIAS.....
FIELDALIAS.....
FIELDALIAS....
FIELDALIAS.....
FIELDALIAS.....
FIELDALIAS...

inputs.conf

[monitor:///path/to/app/*-eventlogfile-splunk.csv]
index = salesforce
sourcetype = salesforce_csv_input
disabled = 0
initCrcLength = 1024
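One tentative explanation that fits these symptoms: the tailing processor identifies files by a CRC of their first initCrcLength bytes, so CSV files that all start with the same header can look identical to files already indexed and get skipped. If that is the cause, mixing the file path into the CRC usually fixes it:

```
# inputs.conf on the UF
[monitor:///path/to/app/*-eventlogfile-splunk.csv]
index = salesforce
sourcetype = salesforce_csv_input
disabled = 0
initCrcLength = 1024
# include the full path in the CRC so files sharing an identical
# header are not mistaken for files already indexed
crcSalt = <SOURCE>
```

The literal string <SOURCE> is the documented special value, not a placeholder to substitute. Note that adding crcSalt changes the CRC of files already seen, so existing files may be re-ingested once.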
Hello, I am evaluating Splunk Enterprise in our environment and have set up an on-premises server for that. Currently I am trying to configure the MS Windows AD Objects app on Splunk. The procedures have been carefully followed; however, it fails on the last step, building lookups. When the process was trying to build the AD_Obj_Admin_Audit lookup, the below error was reported:

Warning: No Windows Change Events Found - Change Time Period

Due to the error, the setup cannot be completed. I have changed the time period up to 5 years, but still no luck. It would be appreciated if you could help us troubleshoot the issue. Thanks
Hello, I would like to find the real-time network traffic (bps) from SNMP logs. I collect logs from 3 hosts, but they send them irregularly, like below. This is my SPL query and its results. How do I write a query to see the total average traffic of the 3 hosts per minute (bps)?

index=my_network sourcetype=snmp_trap_log (host="10.0.0.1" OR host="10.0.0.2" OR host="10.0.0.3") traffic=*
| table _time, host, traffic

_time                host      traffic (Mbyte per second)
2021/02/10 00:04:48  10.0.0.1  0.29
2021/02/10 00:04:34  10.0.0.3  0.34
2021/02/10 00:04:30  10.0.0.2  0.20
2021/02/10 00:04:28  10.0.0.1  0.25
2021/02/10 00:04:25  10.0.0.3  0.31
2021/02/10 00:04:13  10.0.0.3  0.32
2021/02/10 00:04:08  10.0.0.1  0.22
2021/02/10 00:04:02  10.0.0.3  0.28
2021/02/10 00:04:00  10.0.0.2  0.16
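A tentative sketch building on the query above: bucket the samples into one-minute spans, average per host first (since the hosts report at different rates), then sum the per-host averages and convert MB/s to bits per second. Whether the right factor is 1024*1024 or 1000*1000 depends on how the device defines a megabyte, so verify that against the source:

```
index=my_network sourcetype=snmp_trap_log (host="10.0.0.1" OR host="10.0.0.2" OR host="10.0.0.3") traffic=*
| bin _time span=1m
| stats avg(traffic) as avg_MBps by _time, host
| stats sum(avg_MBps) as total_MBps by _time
| eval total_bps=round(total_MBps*1024*1024*8, 0)
```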
What is the logic behind the below stanzas under splunk/etc/apps/customappname/metadata/local.meta?

[app/ui]
version=7.3.7
modtime=epoch time

[app/launcher]
version=7.3.7
modtime=epoch time

What are they referring to? And I suppose these settings are local to the app and will not be exported globally, since there is no export=system here?

Thanks
I'm running a heavy forwarder on Red Hat which I recently upgraded to Splunk Enterprise 8.1.1. Most apps survived the upgrade without issue. The Splunk eStreamer app (https://splunkbase.splunk.com/app/3662/), however, doesn't seem to be working anymore. It works for a little while, but then I get the following:

Monitor ERROR [no message or attrs]: ProxyProcess[name=subscriberParser].request(status) timeout

This often appears soon after this:

ERROR [no message or attrs]: 'View' object has no attribute '_View__isHex'
'View' object has no attribute '_View__isHex'
Traceback (most recent call last):
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/baseproc.py", line 209, in receiveInput
    self.onReceive( item )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 350, in onReceive
    _do( items )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 344, in _do
    self.onEvent( item )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 338, in onEvent
    decorate( item['record'], self.settings )
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/pipeline.py", line 185, in decorate
    settings.cache(), record ).create()
  File "/opt/splunk/etc/apps/TA-eStreamer/bin/encore/estreamer/metadata/view.py", line 532, in create
    if(self.__isHex(hex32)) :
AttributeError: 'View' object has no attribute '_View__isHex'

I've tried downloading the latest version of the app; no change. To get it working again, I have to disable the 3 scripts that bring in the data, kill the PIDs running the eStreamer, then re-enable the scripts. Sometimes it works again for a few hours, sometimes only a few minutes. Any suggestions? Also, something worth mentioning: when I go to manage apps, there's no "set up" option for this add-on or the Firepower Splunk app, which is normally where I'd do the config for this.