Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

I am attempting to register my veteran status to my account, but I am getting an error that says my ID.me account is already in use. I did start a new registration process but never added an email, because I already have an account. Where can I find help with this?
I am ingesting Qualys data via the Qualys Technology Add-on for Splunk (v1.8.7). To reduce daily volume, I have chosen to ingest incrementally (i.e., not perform a full backup/pull every day). Fortunately, the TA tags events appropriately to leverage the Vulnerabilities data model. Unfortunately, since I'm only receiving incremental updates each day, I need to perform a dedup after querying the accelerated data model in order to get the current/latest status for each vulnerability. For example, most of my searches begin like this:

    | tstats count from datamodel=Vulnerabilities where index=qualys
        by _time, Vulnerabilities.dest, Vulnerabilities.dest_id, Vulnerabilities.dest_host,
           Vulnerabilities.severity_id, Vulnerabilities.severity, Vulnerabilities.signature_id,
           Vulnerabilities.status, Vulnerabilities.type span=1s
    | fields - count
    | dedup Vulnerabilities.signature_id Vulnerabilities.dest_id sortby -_time

The tstats command targets the accelerated data model and splits the results by vulnerability and host. The fields command removes count, since it isn't used. The dedup command keeps only the most recent event per signature_id (QID) and dest_id (HOST_ID). This works well when filtering individual hosts or QIDs in the tstats where clause, but if I want to count all active, confirmed vulnerabilities by severity and host, the search takes impossibly long because it first pulls all results into memory before attempting the dedup. Ideally, the data model acceleration itself would update the summary with only the latest status, so all that is needed is a tstats against the data model without any dedup. Unfortunately, that doesn't appear to be possible. Does anyone have ideas on how to deal with this?
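For reference, one pattern sometimes used for this kind of "latest state per key" problem is to push the deduplication into the tstats aggregation itself with latest(), so the raw per-day rows never have to be held in memory. A minimal sketch, reusing the field names above (summariesonly=true and the final rollup are assumptions to adapt):

    | tstats summariesonly=true
            latest(_time) as _time
            latest(Vulnerabilities.status) as status
            latest(Vulnerabilities.severity) as severity
        from datamodel=Vulnerabilities where index=qualys
        by Vulnerabilities.dest, Vulnerabilities.signature_id
    | stats count by severity, Vulnerabilities.dest

Whether this matches the dedup exactly depends on the data (latest() keys off _time within each by-group), so treat it as a starting point rather than a drop-in replacement.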
I have set up a Cloud Storage Bucket input using the Splunk Add-on for Google Cloud Platform, but I do not see a way to easily configure the sourcetype. I added a sourcetype line in google_cloud_storage_buckets.conf, but when restarting Splunk it showed "Invalid key in stanza [______] in /opt/splunk/etc/apps/Splunk_TA_google-cloudplatform/local/google_cloud_storage_buckets.conf, line 8: sourcetype (value: ______)". The only way I was able to get my desired sourcetype set was to edit the Python script that sets it, which is not a long-term solution since I may need to add more inputs with different sourcetypes later. Even then, after updating the script to set the desired sourcetype, the configuration I set up for this sourcetype (regarding event breaking) is not applied - even though it does work properly when adding the data from a local copy of the same file from the GCP bucket. Is the script doing something to the files from the bucket that changes their format, or is there a reason why the props.conf settings for the sourcetype are not being applied to the files pulled down from the GCP bucket?
I have two timestamps in this format within my log events:

    start: 2005-07-05T04:28:34.453494Z
    end: 2005-07-05T05:28:39.462681Z

I would like to subtract the timestamps, then evaluate the result to see if it is greater than or equal to 1 hour, and include that event in my query results. Extra: if so, how long is the duration?
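For reference, a minimal sketch of how this is typically done in SPL: convert both strings to epoch time with strptime(), take the difference, and filter on it. The field names start and end and the exact format string are assumptions to adjust to the actual extraction:

    | eval start_epoch=strptime(start, "%Y-%m-%dT%H:%M:%S.%6NZ")
    | eval end_epoch=strptime(end, "%Y-%m-%dT%H:%M:%S.%6NZ")
    | eval duration_sec=end_epoch-start_epoch
    | where duration_sec>=3600
    | eval duration_readable=tostring(duration_sec, "duration")

duration_readable answers the "how long" part in HH:MM:SS form.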
In Module 5 Lab #8, I am asked to perform a search using "fail* AND password" over All Time. The search returned "No results found". In the previous step I performed the search "error or fail" over All Time, and that returned the expected response. Why is the task in #8 not returning the expected results?
Hi all, a 3rd party has installed Splunk SE and it hasn't been fully configured. I'm looking at the Basic Malware Outbreak content and it references Symantec; how do I amend this to include our antivirus?
I am working on a Splunk Add-on that was developed on standalone Splunk and that I am now testing in a search head cluster (SHC) environment. The Add-on uses a modular script that sends Splunk search data or notable event data to a 3rd party app when a saved search matches. Our Add-on ships two saved searches that users can run to verify that the add-on is configured properly with the 3rd party app. On standalone Splunk (with Splunk ES also installed), both of these saved searches are triggered when a failed Splunk server login is logged in splunkd:

    index=_internal sourcetype=splunkd ERROR UiAuth                      (for Splunk without Splunk ES)
    index=_internal sourcetype=splunkd ERROR UiAuth | `get_event_id`     (for Splunk ES)

In the SHC environment, when there is a failed login, we are only seeing one of the two rules triggered (and it is not consistent which one). Can anyone explain why only one rule would be triggered in an SHC environment? I'm trying to figure out whether the SHC is not set up properly, whether there is something the Add-on is not doing properly, or whether there is some other issue. We are currently testing on Splunk 8.1.0 and Splunk ES 6.1.1, CIM 4.15.0. Do you think upgrading Splunk to 8.2 would make any difference? Thanks for any help! AnnMarie
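For reference, a small diagnostic sketch that can help narrow this down: the scheduler logs in _internal record, per search head, whether each scheduled search ran, was skipped, or was deferred. The saved search names below are placeholders for the two shipped verification searches:

    index=_internal sourcetype=scheduler savedsearch_name IN ("<verification search 1>", "<verification search 2>")
    | stats count by host, savedsearch_name, status

If one of the two shows status=skipped, or never appears at all for the window in question, the issue is on the scheduling side of the cluster rather than in the add-on's alert action.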
Hi team, I have already worked with the lookup feature of Splunk (tables, definitions, and automatic lookups) and it is working correctly; I even created a script that uses the inputlookup command to automatically update the lookup table when needed. The CSV file of the lookup table has the following structure:

    appid,appName
    APP01729-af-ws.service,APP01729
    APP01729-af-sch.service,APP01729
    APP01729-af-wkr.service,APP01729

The idea with this lookup is to match appid against one of the fields Splunk has in a search and then add the value of appName to the results of that search. For example, appid matches the values of systemd_unit, and with that match the search gains a new field called appName with the corresponding appName value from the lookup table. That behavior works with the values above, but when I create another lookup table and its definition with different values (matching the same fields in Splunk), the new field is not created in the search. I tested with this search:

    index=main_dev ... | spath systemd_unit
    | search systemd_unit="*container*"
    | lookup appids_lookup appid as systemd_unit OUTPUTNEW appName

Here the systemd_unit values to match are everything that has 'container' in its name, and the search should create a new field called appName with the corresponding appName value from the lookup table. That doesn't work for the new values, i.e., the container entries that only exist in the new lookup table. However, the old values - values carried over from the other lookup table that I also put into the new one - work correctly and the new field is created in the search. My question: do I need to do anything beyond creating the lookup table and its definition to make this work for new values?
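For reference, a minimal sketch of two checks that usually isolate this kind of problem (the definition name appids_lookup_v2 is hypothetical; substitute whatever the new definition is called). First, confirm the new definition itself returns rows, which rules out table contents or permissions:

    | inputlookup appids_lookup_v2

Then, while testing, use OUTPUT instead of OUTPUTNEW, since OUTPUTNEW only writes appName on events where that field does not already exist - an appName already set by an automatic lookup will silently suppress the new value:

    index=main_dev
    | spath systemd_unit
    | search systemd_unit="*container*"
    | lookup appids_lookup_v2 appid AS systemd_unit OUTPUT appName AS appName_from_v2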
After updating our TA we realized the action field autolookup wasn't working anymore. Digging through the TA, I see in props.conf that the automatic lookup "LOOKUP-estreamer_fw_action" is commented out. Is there a reason this was done? @douglashurd - can you please advise? Thanks!
Hi, I have been looking but can't seem to make much sense of it all; I'm new to Splunk. I'm trying to create a search and alert from a CSV file. The CSV file contains Domain Admin accounts, and I want to create a search for a number of event IDs on those domain admin accounts. I tried:

    index=win sourcetype=wineventlog EventCode=*the events im looking for* | inputlookup file.csv

but I can't seem to make it work. Any help would be great.
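For reference, a minimal sketch of the usual pattern: inputlookup can't be piped onto the end of an event search like that, but it can run as a subsearch so the accounts in the CSV become a filter. The EventCode values and the Account column name are placeholders for the real event IDs and the real CSV header:

    index=win sourcetype=wineventlog EventCode IN (4625, 4672)
        [ | inputlookup file.csv
          | rename Account AS user
          | fields user ]

The subsearch expands to (user="acct1" OR user="acct2" ...), so the rename needs to map the CSV column onto whatever field name the Windows events actually use for the account.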
We've got an alert set up on the Monitoring Console to let us know when a machine is down, but sometimes - such as for hardware maintenance - we know that a machine is going to be taken out for some amount of time and we don't want the alert to fire during that window. What I'd like to do is create a KVStore collection to track these servers, so that we could run a "remove from service" script to add the server to the KVStore. Then I could modify the alert to ignore any results that are in that lookup. (The script would also handle things like taking a clustered indexer offline.) I don't have much (any) experience with this, so I'd appreciate any help. My impression is that since the alert is on the MonCon, I'd want to put the KVStore there as well.

    # SPLUNK_HOME/etc/apps/splunk_monitoring_console/local/collections.conf
    [server_down]
    enforceTypes = true
    fields.timestamp = time
    fields.servername = string
    fields.note = string

I think I remember reading that you can set the timestamp to auto-fill when adding to the KVStore, but I can't remember how. servername is the server, and I plan to use the note field to track information like why it was taken down. That way, if a server is added to the KVStore because of a bad hard drive, for example, and then added again because a group of servers it belongs to is undergoing upgrades, I can remove the upgrade entry once the upgrades are done while still keeping it out of service while it needs hardware maintenance.

Do I necessarily need a transforms.conf entry, or will the KVStore defined in collections.conf provide everything I need? If I want to test what's in my KVStore, I should be able to use this, correct?

    | outputlookup server_down

And if I want to test adding something to the KVStore, would this also work?

    servername="myserver" note="hardware maintenance" | inputlookup server_down

And to verify my REST queries, they should look roughly like this, correct?

    # Show servers in KVStore
    curl -k -u admin:yourpassword \
        https://<monitoringconsole>:8089/servicesNS/nobody/splunk_monitoring_console/storage/collections/data/server_down

    # Add a server
    curl -k -u admin:password \
        https://<monitoringconsole>:8089/servicesNS/nobody/splunk_monitoring_console/storage/collections/data/server_down/ \
        -H 'Content-Type: application/json' \
        -d '{"servername": "myserver", "note": "hardware maintenance"}'

    # Show all servers under hardware maintenance
    curl -k -u admin:yourpassword \
        https://<monitoringconsole>:8089/servicesNS/nobody/splunk_monitoring_console/storage/collections/data/server_down?note=hardware%20maintenance

    # Remove a server
    ???

I'm not really sure about removing a server. I think I need to somehow get the key, but I'm not sure of the best way to do that - with the timestamp maybe? Thanks for any help! I've tried to include what I think I know from my reading and searching so far, but I know there's a bit more to hammer out, and this seems to be the best place to ask.
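For reference, a minimal sketch of the SPL side, under the assumption that a lookup definition (a transforms.conf stanza with collection = server_down, called server_down_lookup here) has been created on top of the collection - the REST endpoints work against the bare collection, but the inputlookup/outputlookup commands generally go through a lookup definition. Viewing the current contents:

    | inputlookup server_down_lookup

Adding a row:

    | makeresults
    | eval servername="myserver", note="hardware maintenance", timestamp=now()
    | fields servername note timestamp
    | outputlookup append=true server_down_lookup

Removing a row without handling _key at all, by rewriting the collection minus that entry:

    | inputlookup server_down_lookup
    | where servername!="myserver"
    | outputlookup server_down_lookup

This is a sketch rather than a tested config; on the REST side, deleting a single record does require its _key (a DELETE against .../data/server_down/<_key>), and the _key values are returned when you list the collection.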
Hi all, can anyone confirm this behaviour? When running:

    | rest /services/data/indexes | table title *datatype*

I'm only getting back event indexes. From the documentation (https://docs.splunk.com/Documentation/Splunk/8.0.0/RESTREF/RESTintrospect): datatype = The type of index (event | metric). I would expect to get all indexes back with datatype set. I've tested with v8.0.7 and v8.2.0. Looks like a bug? What would be an alternative way to determine the type of an index programmatically from outside, using the API? Best regards, Andreas
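For reference, a sketch of one thing worth trying: the data/indexes endpoint takes a datatype parameter that defaults to event, and the rest command passes extra key=value arguments through to the endpoint, so asking for all datatypes explicitly may be enough. Treat this as something to verify against your version rather than a confirmed fix:

    | rest /services/data/indexes datatype=all
    | table title datatype

The same datatype=all argument can be appended as a query parameter to the raw REST call from outside Splunk.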
Hi, I have one sourcetype, sourcetypeA, which has the following fields:

    Cluster1  UsageA
    A         10
    B         15

and so on. Then I have another sourcetype, sourcetypeB, which has these fields:

    Cluster 2  Usage B
    C          5
    D          20

I want to write a Splunk search that combines both sourcetypes to show the top 3 usage values in total among all clusters, like this:

    Cluster  Usage
    D        20
    B        15
    A        10

So neither "Cluster1"/"Cluster 2" nor "UsageA"/"Usage B" are common fields. Is this possible to do?
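For reference, a minimal sketch of the usual approach: search both sourcetypes at once, normalize the differently named fields into one pair with coalesce (single quotes are needed around field names that contain spaces), and then rank. The index name and exact field names are assumptions to adjust:

    (index=myindex sourcetype=sourcetypeA) OR (index=myindex sourcetype=sourcetypeB)
    | eval Cluster=coalesce(Cluster1, 'Cluster 2')
    | eval Usage=coalesce(UsageA, 'Usage B')
    | stats sum(Usage) as Usage by Cluster
    | sort - Usage
    | head 3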
I have a source database that displays all of the info I need, separated by colons. Example: "ilruPartNumber":"12345", "lruSoftwareVersion":"7.10.0.74". All of the info I need is separated by a ":". What I want is for the search to list the name and then the number, for example: ilruPartNumber = 12345.
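For reference, two sketches depending on what the raw events look like ("..." stands for the base search). If the events are valid JSON, spath alone turns each "name":"value" pair into a field:

    ... | spath | table ilruPartNumber lruSoftwareVersion

If they are not quite JSON, a targeted regex extraction works for a known name (ilruPartNumber here; the same pattern applies to the other names):

    ... | rex field=_raw "\"ilruPartNumber\":\"(?<ilruPartNumber>[^\"]+)\""
        | table ilruPartNumber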
Hi all, good day! This is regarding GlusterFS and the PostgreSQL DB in Phantom. We have a cluster with the full set of components (3 nodes - 1 master and 2 nodes - plus embedded Splunk, GlusterFS, and the PostgreSQL DB). Our question: can we migrate GlusterFS and the PostgreSQL DB onto a single server? We have cost constraints and can only have a single server for both the DB and the file system. Would that be possible? Are there any issues or challenges that would arise if we do that? Do you have a sequence of steps for doing it? Kindly help us with the details for the above. Regards, Yeswanth M.
Hey there, I have written a custom command to execute whois queries using an internal whois server, which expects CSV files and returns JSON files containing the results. The CSVs are sent using HTTP POST; I have used Python's requests module for this. As company policy leaves no room for internet connections, I have manually imported the module. If I remove the HTTP request (the call to getWhoisInfo), the code works perfectly fine and writes "TEST" to each event in the newly generated column. As soon as I execute the function, and hence make the HTTP POST, there seems to be some kind of conflict with Splunk. It results in the error "The external search command 'whois' did not return events in descending time order, as expected." Has anyone an idea or faced similar issues? Thanks a lot in advance.

    #!/usr/bin/env python
    import sys
    import os
    import csv
    import pathlib
    import json

    # make the bundled copies of requests and splunklib importable
    script_path = os.path.realpath(__file__)
    sys.path.append(os.path.join(script_path, "requests-2.25.1"))
    sys.path.append(os.path.join(script_path, "splunklib"))

    import requests
    from splunklib.searchcommands import dispatch, EventingCommand, Configuration, Option, validators

    csv_header = "Query"

    def createCSV(header, data):
        # write the query values into a temporary CSV next to the app's tmp directory
        with open(os.path.join(pathlib.Path(__file__).parent.absolute(), "..", "tmp", "whois_temp.csv"), "w+", newline='\n') as tmpcsv:
            wr = csv.writer(tmpcsv, quoting=csv.QUOTE_NONE)
            wr.writerow([csv_header])
            wr.writerows(data)
        return os.path.join(pathlib.Path(__file__).parent.absolute(), "..", "tmp", "whois_temp.csv")

    def getWhoisInfo(csv_file):
        # POST the CSV of queries to the internal whois server; return the JSON body on success
        response = requests.post("http://[TargetHost]:[TargetPort]/whois",
                                 files={'file': open(csv_file, "r")},
                                 timeout=2)
        print(response.status_code)
        if response.status_code == 200:
            return response.text
        else:
            return None

    @Configuration()
    class whoisCommand(EventingCommand):
        """ %(synopsis)

        ##Syntax

        %(syntax)

        ##Description

        %(description)

        """
        fieldname = Option(
            doc='''
            **Syntax:** **fieldname=***<fieldname>*
            **Description:** Name of the field that will hold the whois results''',
            require=True, validate=validators.Fieldname())

        def transform(self, records):
            # collect the hosts column, send it to the whois server, then tag each record
            rec = list(records)
            data = [[record["hosts"]] for record in rec if record["hosts"] != ""]
            tmp_csv = createCSV(csv_header, data)
            whois_response = getWhoisInfo(tmp_csv)
            for record in rec:
                record[self.fieldname] = "TEST"
            return rec

    if __name__ == "__main__":
        dispatch(whoisCommand, sys.argv, sys.stdin, sys.stdout, __name__)

PS: setting overrides_timeorder to true does not help.
A scheduled search is hanging when it approaches around 28% completion. In search.log, the following message appears shortly before the search begins to experience issues:

    07-06-2021 07:21:30.846 WARN SearchResultCollator - Collector xxx-xxxx-xxxx produced chunk with startTime 1625255401.000000 when our cursor time was already 1625255234.000000, time ordering has failed!
    07-06-2021 07:21:31.273 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
    07-06-2021 07:21:31.284 INFO SearchParser - PARSING: noop
    07-06-2021 07:21:31.284 INFO DispatchExecutor - BEGIN OPEN: Processor=noop
    07-06-2021 07:21:31.284 INFO SearchEvaluator - Searched for keyword in results without _raw field
    07-06-2021 07:21:31.284 INFO DispatchExecutor - END OPEN: Processor=noop
    07-06-2021 07:21:31.284 INFO PreviewExecutor - Finished preview generation in 0.000866621 seconds.
    07-06-2021 07:21:32.373 INFO ReducePhaseExecutor - ReducePhaseExecutor=1 action=PREVIEW
    07-06-2021 07:21:43.188 INFO TcpOutbound - Received unexpected socket close condition with unprocessed data in RX buffer. Processing remaining bytes=11014 of data in RX buffer. socket_status="Connection closed by peer" paused=1

Appears to be related to this issue, but I don't see any additional log entries mentioning the ulimit value.
Hi, given the below system architecture on a single server:

1. When I pass the OS data generated by the Splunk add-on (Splunk App for Unix and Linux) through the universal forwarder to the Splunk single instance, I get fields like UsedBytes, PercentMemory, pctCPU, etc., as below.

2. But when I pass the OS data generated by the Splunk add-on (Splunk App for Unix and Linux) through the universal forwarder to Cribl, and then from Cribl to the Splunk single instance, these fields are not computed, as below.

As per my understanding, these extra fields are computed with the help of the props.conf file in the path /opt/SP/splunk/splunkforwarder/etc/apps/Splunk_TA_nix/default, but I don't understand why this file is not taking effect or why the fields are not getting calculated when the data is passed from UF to Cribl to Splunk. Any idea how to pass the data from the universal forwarder to Cribl and then to Splunk (path no. 2) and still get the extra fields calculated? Best regards, Noura Ali
Hi, I am trying to configure the Fortigate App for Splunk. The first dashboard is working fine, but the traffic and event dashboards are showing empty. If anyone knows how to set this up and populate the logs, please help me. Above is an image of the current source type, and the index name is firewall, but the dashboard below it is empty.
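For reference, a quick check that often explains empty panels in firewall apps: the dashboards are typically driven by eventtypes tied to specific sourcetypes, so it is worth confirming what sourcetype the firewall data is actually indexed with and comparing that against what the app expects. The index name is taken from the post above:

    index=firewall | stats count by index, sourcetype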
  Hi, I am trying to configure the Fortigate App for splunk, the first dashboard is working fine but no traffic, event dashboard showing empty  if any one know how to setup and populate the logs pls help me.   Above is image current source type and index name is firewall    But Below Dashboard is empty      
Hello, I have Splunk ingesting data from a folder every day. Recently the files changed the names of their fields. Here is a sample; there are 44 fields in total:

    Old             New
    Number          number
    Correlation ID  correlation_id
    Opened          opened_at
    Priority        priority
    Category        category
    Site            u_customer_site
    Domain          u_domain
    Nature          u_nature

I was wondering if there is any way I can handle this change without adding the | rename 44 times to every single dashboard.
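For reference, a minimal sketch of one way to contain the change on the SPL side: collapse all of the renames into a single rename command that maps the new names back to the old ones (only the sampled fields are shown here; the remaining ones follow the same pattern), and keep that one line in a search macro or shared base search so each dashboard references it once instead of repeating 44 renames:

    | rename number AS Number, correlation_id AS "Correlation ID", opened_at AS Opened,
             priority AS Priority, category AS Category, u_customer_site AS Site,
             u_domain AS Domain, u_nature AS Nature

Field aliases defined in props.conf on the search head are another common way to apply the same mapping with no dashboard changes at all.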