All Topics


I have a Python script that makes an API call and collects events. The number of events it collects is correct, but every field ends up with duplicate values. Can you please assist? Here is my script:

    response = helper.send_http_request(rest_url, 'GET', parameters=queryParam, payload=None,
                                        headers=headers, cookies=None, verify=False, cert=None,
                                        timeout=None, use_proxy=False)
    r_headers = response.headers
    r_json = response.json()
    r_status = response.status_code
    if r_status != 200:
        response.raise_for_status()
    final_result = []
    for _file in r_json:
        fileid = str(_file["fileid"])
        state = helper.get_check_point(fileid)
        if state is None:
            final_result.append(_file)
            helper.save_check_point(fileid, "Indexed")
    event = helper.new_event(json.dumps(final_result), time=None, host=None, index=None,
                             source=None, sourcetype=None, done=True, unbroken=True)
    ew.write_event(event)

The API response:

    [
        {
            "fileid": "abc.txt",
            "source": "source1",
            "destination": "dest1",
            "servername": "server1"
        },
        {
            "fileid": "xyz.txt",
            "source": "source2",
            "destination": "dest2",
            "servername": "server2"
        }
    ]

The data after collecting it to the index looks like this (every value appears twice per field):

    fileid            source            destination    servername
    abc.txt abc.txt   source1 source1   dest1 dest1    server1 server1
    xyz.txt xyz.txt   source2 source2   dest2 dest2    server2 server2
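Duplicate values per field often appear when the whole final_result list is written as one big JSON event (so every record contributes a value to each field), or when the data is extracted at both index time and search time. A minimal sketch of writing one event per new record instead; helper and ew are the Add-on Builder objects from the script above, and the checkpoint logic is factored into a plain function so it can be shown standalone:

```python
import json

def new_records(records, checkpoint):
    """Return only records whose fileid has not been seen before,
    updating the checkpoint dict in place (a stand-in for
    helper.get_check_point / helper.save_check_point)."""
    fresh = []
    for rec in records:
        fileid = str(rec["fileid"])
        if fileid not in checkpoint:
            fresh.append(rec)
            checkpoint[fileid] = "Indexed"
    return fresh

# In the modular input itself, each record would become its own event:
# for rec in new_records(r_json, checkpoint):
#     ew.write_event(helper.new_event(json.dumps(rec), done=True, unbroken=True))
```

If values are still doubled after this, check whether the sourcetype has both INDEXED_EXTRACTIONS=json and search-time KV_MODE=json enabled; that combination is a common cause of doubled field values.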
Starting our journey into Splunk and need some help. I am trying to send an alert when a new version of antivirus is installed on our machines. I am monitoring the Windows Application event log, so it would be something like: grab the version from 20 minutes ago in the logs and, if it is different from the current version, send the alert.

    Message=Windows Installer installed the product. Product Name: Antivirus Software. Product Version: 1.0.0.000.1. Product Language: 001. Manufacturer: Antivirus. Installation success or error status: 0

Any ideas on how to start this search?
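One way to start is to extract the version with rex and compare the newest and oldest values seen per host, run over a window that covers both the old and new install events. This is only a sketch; the index and sourcetype names are assumptions to adjust for your environment:

```
index=wineventlog sourcetype="WinEventLog:Application" "Windows Installer installed the product" "Product Name: Antivirus Software"
| rex field=Message "Product Version: (?<product_version>\d+(\.\d+)+)"
| stats latest(product_version) AS current_version earliest(product_version) AS previous_version BY host
| where current_version != previous_version
```

Scheduled every 20 minutes over a matching window, this returns a row only for hosts whose installed version changed inside the window.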
Hi, I have a query like this:

    index=star eventtype=login-history action=success Username=**
    | stats count by Username
    | sort - count
    | head 10

So my result is a list of usernames with the login count for each one. I know some of the users are bots, so I want to add a prefix to those usernames, like BOT_Username, probably with an if condition. For example, my result is:

    Alice   10
    Bob      8
    Carol    7
    David    4
    Eddie    2

I know Alice and Bob are bots, so I need:

    BOT_Alice   10
    BOT_Bob      8
    Carol        7
    David        4
    Eddie        2

Thanks in advance!
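A sketch of the if-condition idea, with the bot names hard-coded in eval (a lookup table would scale better if the bot list grows):

```
index=star eventtype=login-history action=success Username=**
| stats count by Username
| eval Username=if(Username="Alice" OR Username="Bob", "BOT_".Username, Username)
| sort - count
| head 10
```

The eval runs after the stats, so the prefix is applied to the aggregated rows rather than to every raw event.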
I currently have a Splunk cluster that looks like this:

    Splunk         CentOS Version    Splunk Version
    Master         7.5               7.0.0
    Forwarder      7.5               Universal Forwarder 6.6.3
    Search Head    6.5               7.0.0
    Indexer 1      6.5               7.0.0
    Indexer 2      6.5               7.0.0
    Indexer 3      6.5               7.0.0
    Indexer 4      6.5               7.0.0

I have 4 new servers that I want to build as indexers to replace the 4 indexers that I currently have (moving from Splunk 7.0.0 to Splunk 8.x in the process). My first plan was to build the 4 new indexers and join them to the current indexer cluster so that existing data could replicate between them, and then retire the old indexers from the cluster. The problem with this idea is that my existing indexers are running CentOS 6.x and my new indexers will be running CentOS 7.x; as I understand it from reading the documentation, it is not possible to have different OS versions running in the same indexer cluster. As there is no easy way to upgrade from CentOS 6.x to CentOS 7.x, I'd rather avoid having to do this.

So my next idea was to build the new indexers as a separate cluster, so that I have one cluster of CentOS 6.x indexers and one cluster of CentOS 7.x indexers, then send new data only to the new cluster and let old data eventually age off on the old cluster, at which point I can retire it. During this time I will have a Search Head that searches across both clusters, so that old and new data is searchable. My questions on that:

- Will my Search Head running on CentOS 6.x be able to search across both a CentOS 6.x indexer cluster and a CentOS 7.x indexer cluster?
- Should I instead look at creating the second cluster of indexers and then manually migrating data between the old and new clusters? Although that seems like a harder process.

How would you approach this?
Hi everyone, I have strange Splunk behavior regarding one of the indexes, but first a little bit of background:

- Environment is an indexer cluster with 1 SH
- Proxy logs are ingested from a syslog server via universal forwarder (monitor input)
- The monitor input uses the host_segment option to extract the host value
- Sourcetype is set to "cisco:wsa:squid" from the Splunkbase app "Splunk_TA_cisco-wsa"; I'm not using any local configuration for that sourcetype (on any instance)
- There are no props.conf stanzas that apply configuration based on source or host (i.e. [host::something]) for this specific source or host

The issue: search 1 (with the field "host") in fast mode is 10 to 20 times slower than search 2.

Search 1:

    index=my_index sourcetype=cisco:wsa:squid
    | fields _time, _indextime, source, sourcetype, host, index, splunk_server, _raw

Search 2:

    index=my_index sourcetype=cisco:wsa:squid
    | fields _time, _indextime, source, sourcetype, index, splunk_server, _raw

I have already reviewed the full configuration and there is nothing on any of the instances that modifies the field "host" in any way, yet using it in my search is drastically slower, which is causing issues further down the line. The issue does not manifest on other indexes, and all indexes are configured with the same options in indexes.conf. Hope someone can give me a good clue for troubleshooting.
I was trying to onboard data from Cisco Meraki when I noticed the following: the Splunk Add-on for Cisco Meraki is currently getting data from only the first network_id in the Organization. Could anyone help me understand how we could enhance the code so that it fetches data from all the networks in the Organization?
Hi, I have some single-string log statements that look like the following:

    INFO ControllerNameHere f1d46561-b382-4685-9d7a-ebd76f40c355 EXT | <action> | Time 80

I want to make a query that groups the <action> types and then calculates the min, max and avg of the Time part of the string. So far I have had success with the average:

    index=my_index* host=hostName EXT
    | rex field=_raw "(EXT \| )(?<CategoryString>.+)( \| Time)"
    | rex field=_raw "(\| Time)(?<TimeValue>.+)"
    | stats mean(TimeValue) BY CategoryString

which returns the mean value of all entries. The data behind the query is grouped into 4 actions per request; the first action has the values 80, 45, 71, 63, 458. When I run the above query I get a correct mean. But when switching to e.g. max, I get a maximum value of 80, which seems wrong, although it corresponds to the latest value. What am I missing here?
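max() returning 80 when 458 is present, while the mean looks right, is the usual signature of a lexicographic (string) comparison: rex captures are strings, and the TimeValue capture above also keeps the leading space, so "80" sorts above "458". A sketch of the same search with a tighter capture and an explicit numeric conversion:

```
index=my_index* host=hostName EXT
| rex field=_raw "EXT \| (?<CategoryString>.+) \| Time (?<TimeValue>\d+)"
| eval TimeValue=tonumber(TimeValue)
| stats min(TimeValue) max(TimeValue) avg(TimeValue) BY CategoryString
```

With TimeValue guaranteed numeric, min/max/avg all use numeric ordering.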
Hi All, I have followed the instructions under https://docs.splunk.com/Documentation/AddOns/released/Tomcat/Recommendedfields, but as you can see in the picture below, there are question marks before and after the field value. Why is that needed? Thank you in advance for your help.
I'll probably find my solution eventually, but if someone has something at hand, I'd be grateful for sharing. I have some results. Let's say they are like this:

    Count    FieldA    FieldB
    11       a
    12       b
    34       c         1
    54       d         1
    462      e
    0        f         3
    12       g         3
    4        h         3

I would like the values from the Count column summed up, but only for the events that have FieldB defined. The rest I want left split by FieldA. For those summed up, I want FieldA aggregated into a multivalue field. So effectively the output should be like:

    Count    FieldA    FieldB
    11       a
    12       b
    88       c d       1
    462      e
    16       f g h     3

OK. I think I can get it done by adding another column created conditionally from either FieldA or FieldB, then aggregating by this field. Something like this:

    <initial search>
    | eval tempfield=if(isnull(fieldB),"fieldA-".fieldA,"fieldB-".fieldB)
    | stats sum(count) as count values(fieldA) as fieldA values(fieldB) as fieldB by tempfield
    | fields - tempfield

Any nicer way?
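A slightly more compact variant of the same idea (a sketch): concatenating a string with a null field yields null, so coalesce falls through to the FieldA branch exactly when FieldB is missing, and the prefixes keep the two key namespaces from colliding:

```
<initial search>
| eval tempfield=coalesce("B-".fieldB, "A-".fieldA)
| stats sum(count) AS count values(fieldA) AS fieldA values(fieldB) AS fieldB BY tempfield
| fields - tempfield
```

The logic is the same as the if/isnull version; it just leans on null propagation instead of an explicit condition.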
Dears, we have the deployment server in the DMZ zone and the indexers are in the DRN zone. The Windows team is pushing the packages using SCCM to our DMZ deployment servers and we can see those clients in our deployment servers, but we are not seeing a single log in our Splunk, which means data is not being indexed into our Splunk. Please find the attached architecture screenshot for your reference.

More details:
1. Deployment servers are in the DMZ zone
2. Indexers are in the DRN zone

outputs.conf on the Windows DMZ universal forwarders (Windows DMZ log sources to windows universal forwarder):

    [tcpout]
    defaultGroup = xxxx_idx_win_prod
    indexAndForward = false

    [indexAndForward]
    index = false

    [tcpout:xxxx_idx_win_prod]
    autoLBVolume = 1048576
    server = xxxxsplkwinfrwdr001.xxxxx.xx.xxxx:9997, xxxxsplkwinfrwdr002.xxxxx.xx.xxxx:9997
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem
    autoLBFrequency = 5
    useACK = true

Deployment server configuration (applies to the PROD DRN indexers, forwarders to indexers), /opt/splunk/etc/deployment-apps/xx-xxxx_xxxx_idx_prod_outputs/local/outputs.conf:

    [tcpout]
    defaultGroup = xxxx_idx_prod
    indexAndForward = false

    [indexAndForward]
    index = false

    [tcpout:xxxx_idx_prod]
    autoLBVolume = 1048576
    server = <all DRN indexer IP addresses listed here with port 9997>
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem
    autoLBFrequency = 5
    useACK = true

inputs.conf:

    [splunktcp-ssl:9997]
    disabled = 0

    [SSL]
    sslPassword = password
    clientCert = $SPLUNK_HOME/etc/auth/server.pem

Kindly advise us on this.
If you look at the picture, I can't see the real-time alert option. Could you please assist me to get this on my Splunk?
I created a HEC token called test_app, initially for accepting log data from a test app. That app has morphed into a prod app. I would like to change the HEC token name to prod_app. How do I do that? Thanks.
Hello, I'm trying to use a base search between two single-value panels. The first single panel is over the last 24 h and the second panel must be over the last 7 days, but when I put <earliest>-7d@h</earliest><latest>now</latest> in the second panel I get a validation warning! What do I have to do, please?

    <row>
      <panel>
        <single>
          <search id="test">
            <query>index=toto sourcetype=tutu | fields signaler | stats dc(signaler)</query>
            <earliest>-24h@h</earliest>
            <latest>now</latest>
          </search>
        </single>
      </panel>
      <panel>
        <single>
          <search base="test">
            <query>| stats dc(signaler)</query>
          </search>
        </single>
      </panel>
    </row>
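A post-process search inherits the time range of its base search, which is likely why the validation warning appears: earliest/latest are not allowed on a <search base="..."> element. One option (a sketch, untested) is to give the second panel its own independent search with the 7-day window:

```xml
<panel>
  <single>
    <search>
      <query>index=toto sourcetype=tutu | fields signaler | stats dc(signaler)</query>
      <earliest>-7d@h</earliest>
      <latest>now</latest>
    </search>
  </single>
</panel>
```

Base searches only save work when the panels share the same time range; with different windows, two searches are needed.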
Hello All, I have a query that searches the Windows Security logs and shows results in the following format using a stats function. As you can see, I am grouping connection attempts from multiple users to a particular dest. Also, "Connection Attempts" is the total for all of the users listed under "User" in each row.

    index=xxx source="WinEventLog:Security" EventCode=4624
    | stats values(dest_ip), values(src), values(src_ip), values(user), dc(user) as userCount, count as "Connection Attempts" by dest

    Dest    Dest_IP    Src    SRC_IP    userCount    User                      Connection Attempts
    XX      XXXX       XXX    XXX       3            User A, User B, User C    9
    XX      XXXX       XXX    XXX       2            User D, User E            78

I would like to show how many connection attempts were made by each user. How do I segregate this data per user?
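One approach (a sketch against the same fields) is to aggregate at dest/user granularity first, then roll back up to dest using list(), which keeps the per-user counts in the same order as the usernames:

```
index=xxx source="WinEventLog:Security" EventCode=4624
| stats count AS user_attempts BY dest user
| stats list(user) AS User list(user_attempts) AS "Attempts per User" sum(user_attempts) AS "Connection Attempts" dc(user) AS userCount BY dest
```

Each dest row then shows the user list and a parallel multivalue column of attempts per user, alongside the existing totals.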
Hi all, I have an XML file as below:

    <?xml version="1.0" encoding="UTF-8"?>
    <suite name="abc" timestamp="20.08.2021 15:47:20" hostname="kkt2si" tests="5" failures="1" errors="1" time="0">
      <case name="a" time="626" classname="x">
        <failure message="failed" />
      </case>
      <case name="b" time="427" classname="x" />
      <case name="C" time="616" classname="y" />
      <case name="d" time="626" classname="y">
        <error message="error" />
      </case>
      <case name="e" time="621" classname="x" />
    </suite>

The cases which don't have a failure or an error are the ones which passed. I am able to make a list of cases, but I am confused about how to add a column for the status. Does anyone know the solution for this?

    | spath output=cases path=suite.case{@name} | table cases

This is how I extracted the cases. I want to add a column which shows the status. Please suggest some answers.
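Because failure/error child elements exist only on some cases, spath's multivalue outputs won't stay aligned with the case names. One workaround (a sketch, assuming each event is one whole XML suite and that name is the first attribute of each case, as in the sample) is to split the raw XML into one row per case with rex and mvexpand, then classify each fragment:

```
| rex max_match=0 field=_raw "(?s)(?<case_xml><case .+?(</case>|/>))"
| mvexpand case_xml
| rex field=case_xml "<case name=\"(?<case_name>[^\"]+)\""
| eval status=case(match(case_xml,"<failure"), "failed", match(case_xml,"<error"), "error", true(), "passed")
| table case_name status
```

The lazy match stops at the first /> or </case>, which is enough here because any failure/error tag appears before that point in its case fragment.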
We recently upgraded to Splunk Enterprise 8.2.2, and we just had a license expire in a lower environment and never saw an alert. Upon investigation, it looks like the search for "DMC Alert - Expired and Soon To Expire Licenses" may have an issue. In the search below, if I update "| where has_valid_license == 0" to "| where has_valid_license == 1", it displays the expired alert in the search results. It doesn't appear this search was changed, and it is the same in all our Monitoring Console instances. The alert was working last month, before we upgraded, on 7.2.x. Has anyone else seen the same thing?

    | rest splunk_server_group=dmc_group_license_master /services/licenser/licenses \
    | join type=outer group_id splunk_server [ \
        rest splunk_server_group=dmc_group_license_master /services/licenser/groups \
        | where is_active = 1 \
        | rename title AS group_id \
        | fields is_active group_id splunk_server] \
    | where is_active = 1 \
    | eval days_left = floor((expiration_time - now()) / 86400) \
    | where NOT (quota = 1048576 OR label == "Splunk Enterprise Reset Warnings" OR label == "Splunk Lite Reset Warnings") \
    | eventstats max(eval(if(days_left >= 14, 1, 0))) as has_valid_license by splunk_server \
    | where has_valid_license == 0 AND (status == "EXPIRED" OR days_left < 15) \
    | eval expiration_status = case(days_left >= 14, days_left." days left", days_left < 14 AND days_left >= 0, "Expires soon: ".days_left." days left", days_left < 0, "Expired") \
    | eval total_gb=round(quota/1024/1024/1024,3) \
    | fields splunk_server label license_hash type group_id total_gb expiration_time expiration_status \
    | convert ctime(expiration_time) \
    | rename splunk_server AS Instance label AS "Label" license_hash AS "License Hash" type AS Type group_id AS Group total_gb AS Size expiration_time AS "Expires On" expiration_status AS Status
I am looking to create a simple dashboard with fruit on the x-axis and amount on the y-axis, based on the last event. When I try to list the amount, all the amounts get listed instead of the corresponding fruit. Any help or documentation is appreciated.

    {
        "Results": [
            {
                "Fruit": "Apple",
                "amount": 9
            },
            {
                "Fruit": "Orange",
                "amount": 37
            },
            {
                "Model": "Cherry",
                "Amount": 27
            }
        ]
    }
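A common pattern for this shape of JSON (a sketch, assuming the object above is one event in _raw) is to expand the Results array so each entry becomes its own row before charting; coalesce papers over the inconsistent key names (Fruit/Model, amount/Amount) in the sample:

```
| spath path=Results{} output=result
| mvexpand result
| spath input=result
| eval Fruit=coalesce(Fruit, Model), amount=coalesce(amount, Amount)
| table Fruit amount
```

With one row per fruit, a column chart of amount over Fruit should render as intended; add something like `| head 1` or a latest-event filter before the spath if only the last event should feed the chart.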
I am working on migrating some items over to Dashboard Studio. I have a very simple stats command getting a few counts. One item I have is just an average response time, avg(responseTime). When I put this into my search, that column doesn't get results; other columns like count(eval(status=OK)) populate fine. Also, if I select to run the item as just a search, it works fine and all my data shows. Anyone else have similar issues?
I have been pulling my hair out on this one all day. I have an accelerated data model that has two datasets:

- hostInfo
- networkInfo

They are standalone root searches. They do happen to share some fields, like hostname. Running the searches in a normal Splunk search window works perfectly fine. Example:

    index=summary_host_info search_name="Host_Info" | fields hostname os cpu

However, only the first dataset ever returns results from tstats. I've tested and swapped the two around. Example of a simple query I've been using to test:

    | tstats count("hostInfo.hostname") FROM datamodel="endpoint_info" WHERE nodename="hostInfo"

There are no required fields, permissions seem fine, and the data model summary is 10% built at around 1 GB. I can even recreate the same dataset and use that as the second one, and that second, identical dataset will not return results.

Edit: I finally found a warning after clicking on "Datasets" at the top and clicking into one specifically:

    Issue occurred with data model 'test.s3jaytest'. Issue: 'Failed to generate dmid' Reason: 'Error in 'DataModelCache': Invalid or unaccelerable root object for datamodel'. Failed to parse options. Clearing out read-summary arguments.

What does this mean and how do I fix it? I'm using root searches, not root events.
I have a lookup sample.csv as follows, where one of the Host values is empty:

    Name         Host
    TEST_USER    abc, def
    USER_1       *
    user_3       ghi

Now I use the lookup in a search. For the USER_1 Host I want to use a wildcard. Using the asterisk symbol directly in the lookup doesn't work. Is there any way I can add a wildcard for USER_1? A little research in the Splunk docs gives me some input that I need to use props and transforms to do so. I don't have a props or transforms for that application. Can I create a condition in props and transforms just for this purpose? If so, what should the stanzas be in both configuration files? Any help would be great.