All Topics


Hi everyone,

Basically I am trying to count how many unique customers I had in a period, and that worked well with dc(clientid), until I wanted to know "How many unique customers do I have by product group?":

    | stats dc(clientid) as Customers by year quarter product_toplevel product_lowerlevel

When looking at, for example, only one specific client who purchased 3 products within one group, this can return entries like:

    year  quarter  product_toplevel  product_lowerlevel  Customers
    2022  1        Group A           Article 1           1
    2022  1        Group A           Article 2           1
    2022  1        Group A           Article 3           1

Now Splunk is not wrong here: looking at each product, there is 1 unique customer. However, if I were to sum up the Customers column, it would look like I have 3 customers in total. I would much rather have it return the value "0.3" for Customers in the above example, so that I can export the table to Excel and work with pivot tables while retaining the 'correct' total of unique customers. For that purpose I intend to use eventstats to create a sort of column total and then divide the value of Customers by that column total, but I cannot figure out how to do it. Any ideas?
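A minimal sketch of one way to get fractional contributions that sum back to the true distinct count, assuming one event per client/product purchase (all field names are taken from the question):

    | stats count by year quarter product_toplevel product_lowerlevel clientid
    | eventstats count as rows_per_client by year quarter clientid
    | eval Customers = 1 / rows_per_client
    | stats sum(Customers) as Customers by year quarter product_toplevel product_lowerlevel

Each client appearing in N product rows within a quarter contributes 1/N to each of those rows, so the fractional Customers values sum to dc(clientid) per quarter; in the three-row example above, each row would show roughly 0.33.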
Hello,

I would like to request an improvement in the official Unix TA: adding a fields.conf with the following content:

    [action]
    INDEXED = false
    INDEXED_VALUE = false

Indeed, for some events the value filled in the field action doesn't exist in the indexed event, so the search can't find the events. See the following topic.

Example:

- command launched: useradd splunky
- event received in Splunk (syslog):

    Jul  8 07:21:09 host useradd[4450]: new user: name=splunky, UID=1001, GID=1001, home=/home/splunky, shell=/bin/bash

- applied props.conf:

    REPORT-account_management_for_syslog = useradd, [...]

- applied transforms.conf:

    ## Account Management
    [useradd]
    REGEX = (useradd).*?(?:new (?:user|account))(?:: | (?:added) - )(?:name|account)=([^\,]+),(?:\s)(?:(?:UID|uid)=(\w+),)?(?:\s)(?:(?:GID|gid)=(\w+),)?(?:\s)*(?:home=((?:\/[^\/ ]*)+\/?),)?(?:.*uid=(\d+))?
    FORMAT = vendor_action::"added" action::"created" command::$1 object_category::"user" user::$2 change_type::"AAA" object_id::$3 object_path::$5 status::"success" object_attrs::$4 src_user_id::$6

- Splunk query: index=[...] action="created" --> No result.

With the proposed fields.conf, the event is found and displayed in the results.
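Until the TA ships such a fields.conf, one hedged workaround is to filter on the field after the initial event retrieval, so the value is never treated as an indexed term (sketch, keeping the index placeholder from the question; this trades the missed events for a slower scan of the full index):

    index=[...] | where action="created"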
Hi,

I want to disable multiple alerts/reports using curl (TA-webtools). Basically my results look like below:

    title    app   id
    report1  app1  https://abc.com:8089/servicesNS/nobody/app1/saved/searches/report1
    report2  app2  https://abc.com:8089/servicesNS/nobody/app2/saved/searches/report2
    report3  app3  https://abc.com:8089/servicesNS/nobody/app3/saved/searches/report3

How can I disable all of the id alerts/reports in a single query? Any help is appreciated! @jkat54
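For reference, the saved-searches REST endpoint accepts disabled=1 via POST, so one hedged sketch (plain shell curl rather than the TA-webtools curl command; assumes admin credentials, and -k skips certificate checks) is a loop over the id values:

    for id in \
        https://abc.com:8089/servicesNS/nobody/app1/saved/searches/report1 \
        https://abc.com:8089/servicesNS/nobody/app2/saved/searches/report2 \
        https://abc.com:8089/servicesNS/nobody/app3/saved/searches/report3
    do
        # POST disabled=1 to each saved search to disable it
        curl -k -u admin:changeme "$id" -d disabled=1
    done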
Could not load JSON from CEF parameter: Error Code: Error code unavailable. Error Message: Expecting ',' delimiter: line 5 column 1 (char 97).

This is a failed action for a Phantom asset. In the Phantom asset config we don't have a delimiter block. Could anyone please help me with it? I'm new to Phantom.
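"Expecting ',' delimiter" is the wording of Python's JSON parser, so the message points at the CEF parameter text rather than at any asset setting: the JSON stops being valid at line 5. A purely hypothetical illustration (all key names invented) of input that raises this kind of error, because the comma is missing at the end of line 4:

    {
        "sourceAddress": "10.1.1.1",
        "destinationAddress": "10.2.2.2",
        "deviceAction": "blocked"
        "transportProtocol": "tcp"
    }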
Hi,

I have a setup where a UF is sending data into a HF. From the HF, the data is supposed to be sent to two different indexers with different indexes. For example, indexer01 receives the data with indexA while indexer02 receives the same data with indexB.

This is what I have tried so far, but it is not working. The data flow itself is correct, however: the same data reaches both indexers with the predefined indexA from the UF.

inputs.conf (UF):

    [monitor:///home/name/samplelogs]
    disabled = false
    index = indexA
    sourcetype = sourcetypeA

inputs.conf (HF):

    [splunktcp://9997]

outputs.conf (HF):

    [tcpout]
    defaultGroup = indexer01, indexer02

    [tcpout:indexer01]
    server=indexer01_IP

    [tcpout:indexer02]
    server=indexer02_IP

inputs.conf (indexer02):

    [splunktcp://9997]
    index=indexA
    queue=parsingQueue

props.conf (indexer02):

    [sourcetypeA] (or) [host::UF_hostname] (or) [source::/home/name/samplelogs]
    TRANSFORMS-index = overrideindex

transforms.conf (indexer02):

    [overrideindex]
    DEST_KEY = _MetaData:Index
    REGEX = .
    FORMAT = indexB

Any help would be appreciated! Thanks!
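For context: the HF parses the events, so they arrive at indexer02 already cooked and the index-override transform configured there never runs. One hedged sketch, under the assumption that everything has to happen on the HF, is the documented clone-and-route pattern (the stanza names sourcetypeA_clone, clone_for_idx02, set_indexB, route_idx01 and route_idx02 are hypothetical):

props.conf (HF):

    [sourcetypeA]
    TRANSFORMS-clone = clone_for_idx02, route_idx01

    [sourcetypeA_clone]
    TRANSFORMS-route = set_indexB, route_idx02

transforms.conf (HF):

    [clone_for_idx02]
    REGEX = .
    CLONE_SOURCETYPE = sourcetypeA_clone

    [route_idx01]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = indexer01

    [set_indexB]
    REGEX = .
    DEST_KEY = _MetaData:Index
    FORMAT = indexB

    [route_idx02]
    REGEX = .
    DEST_KEY = _TCP_ROUTING
    FORMAT = indexer02

In this sketch the originals keep indexA and are routed only to the indexer01 output group, while the clones are cloned before routing is set, re-indexed as indexB, and routed only to indexer02; _TCP_ROUTING overrides the defaultGroup in outputs.conf.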
Is MongoDB compacting of indexes to save space after data is deleted a built-in option in Splunk 9? Previous posts indicated it was not possible, and it appeared the reason was that WiredTiger was not the standard storage engine under the hood. Now that WiredTiger is required in Splunk 9, is support for 'compact' a standard feature?

I need an elegant/supported way to free up disk space after deleting data. When you realize that something has flooded your indexes and filled up disk space, what is the supported option to get the individual events out of the index and reclaim the disk space without losing the rest of the data in the index that is wanted? Emphasis on reclaiming/freeing the disk space. Is the only option to let the buckets age out to frozen?

Thanks.
I have a query that must search 9 weeks of data, and then applies a filter against a single field (dv_opened_at) looking for specific events that occurred within an 8 week period. The initial 9 week search is necessary to catch events that were modified after the end of the last week, yet had a dv_modified_at time within the last 8 weeks.

Query:

    index=cmdb (dv_number=* OR number=*) dv_state=* dv_assigned_to [| inputlookup cmdb_users.csv | table dv_assigned_to ] earliest=-8w@w latest=now()
    | table _time number dv_number dv_opened_at dv_assigned_to dv_short_description dv_watch_list dv_sys_updated_on dv_state close_notes
    | dedup number
    | eval dv_opened_at=strptime(dv_opened_at,"%Y-%m-%d %H:%M:%S")
    | where dv_opened_at>=relative_time(now(), "-8w@w") AND dv_opened_at<=relative_time(now(), "@w")
    | eval _time=dv_opened_at
    | bin _time span=1w
    | eval weeknumber=strftime(_time,"%U")
    | rename dv_assigned_to AS Analyst
    | timechart limit=0 useother=false span=1w count BY Analyst

The problem is, the timechart outputs 9 weeks of data, and as expected the last week is all 0's. How do I eliminate the current week from the output, but keep the current week in the initial query?

Output:

    _time       Analyst1  Analyst2  Analyst3  Analyst4
    2022-05-08  19        6         0         0
    2022-05-15  5         4         0         0
    2022-05-22  8         2         0         1
    2022-05-29  7         4         0         0
    2022-06-05  1         3         1         39
    2022-06-12  7         1         4         51
    2022-06-19  3         2         0         59
    2022-06-26  25        5         2         26
    2022-07-03  0         0         0         0    # how to drop this row each weekly report
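One hedged option: since timechart labels each row with the start of its weekly bucket, the current (incomplete) week can be filtered off after the timechart without touching the 9-week base search:

    | timechart limit=0 useother=false span=1w count BY Analyst
    | where _time < relative_time(now(), "@w")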
Our login page is developed by team1 and the main home page (after login) is developed by team2. The event logs from each use completely different structures. I strongly suspect unique system identifiers in the login logs may be carried into the home page logs, but I don't know which fields (out of 20-50 fields in each log) may contain similar values.

Is there a method to find fields that have the same value in both sources if I don't know which fields to match on?

    (index=A sourcetype="login" colA="apple", colB="ABC123", colC="purple")
    (index=B sourcetype="home" field1="yellow", field2="orange", ..., field20="ABC123", field21="Monkey")

How can I search both sources to identify (login.colB == home.field20) if I don't know in advance those fields match? I may not find ANY common values...
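A hedged sketch of one brute-force approach: flatten every field of a sample of events from both sources into field=value pairs, then keep the values that appear in more than one sourcetype (the head 1000 sample is only there to keep mvexpand cheap; adjust or remove as needed):

    (index=A sourcetype="login") OR (index=B sourcetype="home")
    | head 1000
    | foreach * [ eval pairs = mvappend(pairs, "<<FIELD>>=" . tostring('<<FIELD>>')) ]
    | fields sourcetype pairs
    | mvexpand pairs
    | rex field=pairs "(?<fname>[^=]+)=(?<fvalue>.*)"
    | stats values(sourcetype) as sourcetypes values(fname) as fields dc(sourcetype) as st_count by fvalue
    | where st_count > 1

Each surviving row shows a value shared across both sources together with the field names that carry it in each one.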
We have a syslog server collecting data from Meraki wireless devices. There is a UF installed on the syslog server sending data to Splunk. I have been trying to use blacklist to filter out the ICMP protocol events, which we don't need, and I have been unable to drop them. The entry in my inputs.conf file for this is:

    [monitor:///syslog0/syslog/meraki/*/*.log]
    disabled=0
    host_segment = 4
    blacklist1 = protocol=icmp
    blacklist2 = "(?192.\168.\30.\143.)"
    blacklist3 = 10.\12.\239.\7
    index = network
    sourcetype = meraki

I have tried a number of variations and have been unable to get the "protocol=icmp" events to drop. Is there something obvious that I am missing?

Thanks in advance for any suggestions.
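Worth noting: blacklist in a monitor stanza matches file paths, not event content, which is why these entries never drop ICMP events. Event-level filtering uses the documented nullQueue pattern in props.conf/transforms.conf on the first full Splunk instance that parses the data (indexer or heavy forwarder), not on the UF. A minimal sketch (the stanza name drop_icmp is hypothetical, and the regex assumes the literal text protocol=icmp appears in the raw event):

props.conf:

    [meraki]
    TRANSFORMS-dropicmp = drop_icmp

transforms.conf:

    [drop_icmp]
    REGEX = protocol=icmp
    DEST_KEY = queue
    FORMAT = nullQueue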
| eval RouteLatency = if(Name="ABC" AND HTTP="*https://.net.*.com*", bckLatency, RouteLatency)
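If the intent here is wildcard matching on HTTP, note that eval's = compares literal strings, so the asterisks would have to appear in the data. match() is one hedged alternative; the regex below is only a guess at the intended pattern:

    | eval RouteLatency = if(Name="ABC" AND match(HTTP, "https://.*\.net\..*\.com"), bckLatency, RouteLatency)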
Good afternoon.

We currently have our Splunk Cloud receiving logs from firewall, Office 365 and Azure. Now we want to send Windows logs, but after installing the universal forwarder on the Windows machine, we are not able to view the logs in Splunk Cloud. We can see that the collector is receiving the logs, but we don't see them in Splunk Cloud. According to the images, the host is correct, but we are not receiving the logs in the correct index. Could someone please help?
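One thing worth checking, sketched under the assumption that Windows Event Log inputs are in play: if the inputs.conf stanza on the forwarder names an index that does not exist in the Splunk Cloud stack, the events are discarded on arrival. The index name below is a placeholder:

    [WinEventLog://System]
    disabled = 0
    index = win_events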
I have a search that joins an index to a .csv lookup. When I run the search for the last 24 hours in the GUI, I get ~81k matches (expected). When I run the exact same query via the SDK, I get 0 matches. Here is my code:

    import sys
    from time import sleep

    import splunklib.client as client
    import splunklib.results as results

    service = client.connect(
        host=HOST,
        port=PORT,
        username=USERNAME,
        password=PASSWORD)

    query = "search index=my_index sourcetype=my_sourcetype | fields field1 field2 field3 field4 field5 field6 field7 | join my_primary_key [| inputlookup my_lookup_file.csv ]"

    kwargs = {"exec_mode": "normal",
              "earliest_time": "-1440m",
              "latest_time": "now",
              "search_mode": "normal",
              "output_mode": "json"}

    job = service.jobs.create(query, **kwargs)

    # A normal search returns the job's SID right away, so we need to poll for completion
    while True:
        while not job.is_ready():
            pass
        stats = {"isDone": job["isDone"],
                 "doneProgress": float(job["doneProgress"]) * 100,
                 "scanCount": int(job["scanCount"]),
                 "eventCount": int(job["eventCount"]),
                 "resultCount": int(job["resultCount"])}
        status = ("\r%(doneProgress)03.1f%% %(scanCount)d scanned "
                  "%(eventCount)d matched %(resultCount)d results") % stats
        sys.stdout.write(status)
        sys.stdout.flush()
        if stats["isDone"] == "1":
            sys.stdout.write("\n\nDone!\n\n")
            break
        sleep(2)

    # Get the results and display them
    for result in results.JSONResultsReader(job.results(output_mode='json')):
        print(result)

    job.cancel()
    sys.stdout.write('\n')

Can somebody please explain why the query would work and return matches in the GUI but not via the SDK?
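One common culprit, offered as a guess: connect() without owner/app runs the search in a default namespace, where a lookup that is private to an app (or shared only within it) is not visible, so the join subsearch returns nothing. A sketch, where my_app is a hypothetical app name:

    service = client.connect(
        host=HOST,
        port=PORT,
        username=USERNAME,
        password=PASSWORD,
        owner="nobody",   # run in the app's shared namespace
        app="my_app")     # hypothetical: the app where my_lookup_file.csv lives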
How do I list machines that do not match my search?

"If" my script runs, a message is sent to Splunk. The script runs once a week. I can easily see the details of my scripts if they run in Splunk. So how do I list the machines that this script doesn't even start on, i.e. where no entry is sent to Splunk? This search does not list those whose count is zero; how do I list the "zero" machines?

    "MyAppResults"
    | stats count by host
    | stats sum(count) as count by host

If I understood correctly I should be using "inputlookup hosts.csv", but I'm not sure how to use it properly. I still cannot get it to list the "zero" machines.
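A minimal sketch of the usual pattern, assuming hosts.csv has a host column listing every machine that should report:

    "MyAppResults"
    | stats count by host
    | inputlookup append=true hosts.csv
    | fillnull value=0 count
    | stats max(count) as count by host
    | where count=0

The appended lookup rows arrive with no count, fillnull turns that into 0, and the final stats keeps the real event count where one exists; whatever is left at 0 never reported.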
Hello,

I am trying to get data in using the Splunk REST API feature of Splunk Add-on Builder; however, I am not able to get the results using the POST method.

Does anyone know the correct syntax to pass a JSON query in the REST request body? I tried using "data", "payload" and "raw" as the Name, and in the Value I put the JSON query, but it's not working. I keep getting "The response status=500 for request ....".

PS: I have used Postman to validate my request body and it works fine and returns results; however, I am not able to do the same using the Splunk REST API feature. Not sure if I am missing something.
Team, I need help with a Scatter Plot visualization. Here's the search I'm using:

    | inputlookup append=t 07012022KPI_Formatted.csv
    | eval Deployed=strptime(Deployed, "%m/%d/%y")
    | fieldformat Deployed = strftime(Deployed, "%m/%d/%y")
    | table "Deployment Success" Deployed "Deploy Lead Time"

It populates the Statistics I'm looking for. However, when I click Visualization, the Deployed axis is not showing the dates but numbers (0...). (Screenshot of my imported CSV attached for reference.) Any ideas how to fix it?

Thank you,
LK
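One hedged explanation: fieldformat only changes how the statistics table renders the value; visualizations receive the underlying epoch numbers. Replacing fieldformat with a second eval hands the chart the formatted string instead (sketch):

    | inputlookup append=t 07012022KPI_Formatted.csv
    | eval Deployed=strptime(Deployed, "%m/%d/%y")
    | eval Deployed=strftime(Deployed, "%m/%d/%y")
    | table "Deployment Success" Deployed "Deploy Lead Time"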
Hello,

I am using the Splunk Enterprise free trial. I want to add another admin. I am on the local host, so how would the other user (who happens to be in another state) access their account? Could they just download the free trial and log in with the credentials I gave them for the account? Or is it more complicated than that?
Hello Splunkers!!

We are upgrading one of our environments from Splunk 8.2.1 to Splunk 8.2.7. After I upgraded and checked the Monitoring Console, 'Summary' and 'Health Check' are no longer showing on the menu bar. Did I miss anything? How do I fix these issues? Any help would be appreciated.

PS: we hit the same issue when we upgraded to 8.2.2.2.
After upgrading to 9.0, I am seeing the following error:

    ERROR TcpOutputQ [<thread id> TcpOutEloop] - Unexpected event id=<eventid>
Hello,

When I try to test TAs locally on my single instance, the Inputs page fails to load with a 404 and the message:

    This is normal on Splunk search heads as they do not require an Input page. Check your installation or return to the configuration page.

I get that search heads should not run TAs in a distributed architecture, but this is a single instance. Does anyone know how to get around this? I am hoping to find a solution that can be applied globally.

Thank you.
So I'm trying to extract a field called "secureToken=tokenvalue" from our Akamai logs. However, when I try to extract the field, it gives me the following error message:

    The extraction failed. If you are extracting multiple fields, try removing one or more fields. Start with extractions that are embedded within longer text strings.

I have attempted to manually edit the regex, but I don't have a lot of experience with regex, so any help would be greatly appreciated.

Thanks
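As an alternative to the field extractor, a hedged inline rex sketch (the character class assumes the token runs until whitespace, a quote or an ampersand; adjust to the actual log format):

    ... | rex "secureToken=(?<secureToken>[^&\s\"]+)"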