All Topics



Hello, I have a list of a few hundred servers in a CSV file, like in the picture below. I am trying to import the servers as entities in ITSI. When I do the CSV import I get the error below. Any idea why it is not uploading correctly? The ITSI version we are using is 4.11.4.
I am looking for a technical understanding of detecting a "univariate categorical outlier". I have used the ML Toolkit on Splunk, and basically I am trying to detect the "rare" categories, i.e. those with really low frequencies for a given variable of the dataset. I have also followed the thread here, but I couldn't find the information I am looking for. Though I could see links like this one, which discuss different methods like histogram, IQR, and Z-score for anomaly detection, I couldn't find any technical overview. If anyone could help me find the "rare" categories automatically, it would be a huge help, because setting a static threshold like 0.05 doesn't work for all datasets; there has to be some adaptive approach, like the histogram method. Please give me sources on how Splunk finds rare categories. It is fine if you cover only the univariate case instead of the multivariate one. Thanks
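Splunk's exact internals aren't documented in the post, but a threshold-free approach can be sketched in plain Python: count each category, then flag the ones whose frequency is a low-side IQR outlier relative to the other categories, in the spirit of the IQR method mentioned above. This is an illustrative sketch, not Splunk's implementation:

```python
from collections import Counter

def rare_categories(values, k=1.5):
    """Flag categories whose frequency is an IQR outlier on the low side.

    The cutoff adapts to each dataset instead of using a fixed
    threshold like 0.05.
    """
    counts = Counter(values)
    freqs = sorted(counts.values())
    n = len(freqs)
    # nearest-rank quartiles; good enough for a sketch
    q1 = freqs[n // 4]
    q3 = freqs[(3 * n) // 4]
    lower = q1 - k * (q3 - q1)
    return {cat for cat, c in counts.items() if c < lower}

data = ["a"] * 40 + ["b"] * 38 + ["c"] * 42 + ["d"] * 1
print(rare_categories(data))  # → {'d'}
```

In SPL itself, the `rare` command and `anomalydetection` (which supports a histogram method) cover similar ground and are worth comparing against.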
Hi there, I would like to export the contents of a KV store lookup using the Lookup Editor, but the exported file contains only 50k records, even though the original lookup has 80k records. How do I download the entire contents in a single file? Thanks!
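Assuming the 50k cap is in the Lookup Editor's UI export rather than in the KV store itself, one workaround is to export from a search instead (the lookup name below is a placeholder):

```
| inputlookup my_kv_lookup
| outputcsv my_kv_lookup_full.csv
```

`outputcsv` writes the file to `$SPLUNK_HOME/var/run/splunk/csv` on the search head; alternatively, run the `inputlookup` search in the UI and use the search export button.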
Hi all, I'm trying to write a query which is not working as expected; could you please help me with raising an alert? I have fields first_find (value "2021-06-07T09:04:09.130Z") and last_find (values like "2023-02-15T16:15:52.506Z") in this format, which I believe is UTC. I need a search such that if first_find OR last_find matches the current date, the alert is triggered. My search head is set to the IST time zone; would that impact the search? Do I need to convert the field values from UTC to IST to get an alert out of them?
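A sketch of the comparison (the index and source search are placeholders): parse the ISO-8601 strings with `strptime`, then compare both sides formatted as a date. One caveat to verify: `strptime` with a literal `Z` in the format does not apply a timezone, so the string is interpreted in the search head's timezone (IST here); if you need the match on the UTC calendar day, adjust the epoch by the offset first.

```
index=my_index
| eval first_epoch=strptime(first_find, "%Y-%m-%dT%H:%M:%S.%3NZ"),
       last_epoch=strptime(last_find, "%Y-%m-%dT%H:%M:%S.%3NZ")
| eval today=strftime(now(), "%Y-%m-%d")
| where strftime(first_epoch, "%Y-%m-%d")=today OR strftime(last_epoch, "%Y-%m-%d")=today
```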
Hey all, our raw syslogs show the IP addresses of source events, but the results in Splunk replace the IP addresses with their respective hostnames/FQDNs. How can I see the results without the name resolution? I just need to see the IP addresses as they appear in the actual raw syslog. Thanks, Will
I am looking to get the data on a year, month, day, hour, minute, and second basis. The search criteria are:

index="abc" | rex field=_raw "few fields" | stats count as yearcount by year

The above query gives me these columns:

year      yearcount
2023      10

Similar to the above query, I want counts by month, day, ... down to seconds; the final output should be a table like:

year   yearcount   month   monthcount   day   daycount   ...   seconds   secondscount

Is it possible to get this output without using the appendcols command multiple times? That makes the search query very long and inefficient.
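One appendcols-free alternative is to derive all the date parts with `eval` in a single pass and roll the counts up with `eventstats`. A sketch down to day (the rex is kept as a placeholder; hour, minute, and second extend the same way with extra `strftime` fields and `eventstats` lines):

```
index="abc"
| rex field=_raw "few fields"
| eval year=strftime(_time, "%Y"),
       month=strftime(_time, "%Y-%m"),
       day=strftime(_time, "%Y-%m-%d")
| stats count as daycount by year month day
| eventstats sum(daycount) as monthcount by month
| eventstats sum(daycount) as yearcount by year
| table year yearcount month monthcount day daycount
```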
Hi, this works when I use it at search time:

| spath path=messageParts{} output=message | mvexpand message | rex field=message "{\"disposition\":\s+\"(?<disposition>[^\"]+)\",\s+\"sha256\":\s+\"(?<sha>[^\"]+)\",\s+\"md5\":\s+\"(?<md5>[^\"]+)\",\s+\"filename\":\s+\"(?<filename>[^\"]+)\",\s+\"sandboxStatus\":\s+\"(?<sandboxStatus>[^\"]+)\",\s+\"oContentType\":\s+\"(?<oContentType>[^\"]+)\",\s+\"contentType\":\s+\"(?<contentType>[^\"]+)\"}"

BUT how do I put this in props.conf? I have tried MV_ADD = true, but no luck.
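One thing to check: MV_ADD is a transforms.conf setting used with REPORT, not a props.conf EXTRACT setting, and spath/mvexpand have no props.conf equivalent, so the extraction has to anchor on the raw event instead of the derived message field. A sketch under those assumptions (the sourcetype and stanza names are placeholders; with MV_ADD the repeated matches across messageParts become multivalue fields):

```
# props.conf
[my:sourcetype]
REPORT-msg_fields = msg_fields

# transforms.conf
[msg_fields]
REGEX = \{"disposition":\s+"(?<disposition>[^"]+)",\s+"sha256":\s+"(?<sha>[^"]+)",\s+"md5":\s+"(?<md5>[^"]+)",\s+"filename":\s+"(?<filename>[^"]+)",\s+"sandboxStatus":\s+"(?<sandboxStatus>[^"]+)",\s+"oContentType":\s+"(?<oContentType>[^"]+)",\s+"contentType":\s+"(?<contentType>[^"]+)"\}
MV_ADD = true
```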
I followed this doc to generate tokens via the API, but I didn't receive any response from the server: https://docs.appdynamics.com/appd/4.5.x/en/extend-appdynamics/appdynamics-apis/api-clients#APIClients-GeneratetheTokenThroughAPI

The curl command is below:

curl -X POST -H "Content-Type: application/vnd.appd.cntrl+protobuf;v=1" "https://(accountName).saas.appdynamics.com/controller/api/oauth/access_token" -d 'grant_type=client_credentials&client_id=(username)@(accountName)&client_secret=(clientsecret)'

Please help me!
I am trying to get billing data from S3. The data is in Parquet format. I tried to ingest it with the Splunk Add-on for AWS app, but failed: I tried setting all the source types supported by the app, and the data was never parsed correctly. Please tell me how I can ingest Parquet-format data in a Splunk Cloud environment.
Hi Team, I am working on how to log individual rows of my search result table as individual events in Splunk. Below is a picture of the log events and what I'm trying to do with them.
Hi, I am trying to create alerts and dashboards for my O365 and AD logs. Is there somewhere that has an overview of the different possible values in, for example, the Operation field? Since I don't have a log from when a user is created, I don't know what value the log will contain, e.g., UserCreated, UserWasCreated, or CreateUser. Hope that makes sense.
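Rather than guessing the value names, one approach is to trigger the event once (e.g. create a test user) and enumerate what actually appears in the data. A sketch, assuming the Splunk Add-on for Microsoft Office 365 sourcetype and a placeholder index:

```
index=o365 sourcetype="o365:management:activity"
| stats count by Operation
| sort - count
```

The same `stats count by <field>` pattern works for the AD logs; verify the exact Operation strings against your own data rather than documentation, since they vary by workload.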
Hi, I hope that asking this question will not cause controversy. I currently manage a hybrid between Splunk and ELK. Some of the sources come directly to Splunk, where we pay for the licensing, but since some sources send very large volumes of data (as we know, Splunk is very good but very expensive), we send those to ELK, and from Splunk we use queries like this to display data from ELK:

| ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="src_ip:198.7.62.204" fields="*"

To be clear, these logs are not arriving directly in Splunk; Splunk is making an external query to ELK. I would like to know if it is possible to correlate two sources: in this case I need to correlate the Palo Alto logs of type THREAT with those of type TRAFFIC. This is what I have tried, but it does not work:

| ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="Type:TRAFFIC AND Threat_ContentType:end AND Action:allow AND NOT SourceLocation:172.16.0.0-172.31.255.255 AND NOT SourceLocation:192.168.0.0-192.168.255.255 AND NOT SourceLocation:Colombia" fields="GeneratedTime,Threat_ContentType,Action,SourceIP,DestinationIP,DestinationPort,NATDestinationIP,SourceLocation,DestinationLocation,SourceZone,DestinationZone" | table GeneratedTime Threat_ContentType Action SourceIP DestinationIP DestinationPort NATDestinationIP SourceLocation DestinationLocation SourceZone DestinationZone * | append [ ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="type:THREAT" fields="threat" | table threat ] | table threat Action *
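Since append already merges the two result sets, the missing piece is a shared key to correlate on; tabling only `threat` at the end discards the TRAFFIC fields. A sketch (queries abbreviated from the post; it assumes SourceIP/DestinationIP exist in both the TRAFFIC and THREAT documents):

```
| ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="Type:TRAFFIC AND Action:allow" fields="SourceIP,DestinationIP,DestinationPort,Action"
| append
    [ ess eaddr="http://localhost:9200" index=paloalto* tsfield="@timestamp" query="type:THREAT" fields="threat,SourceIP,DestinationIP" ]
| stats values(Action) as Action values(DestinationPort) as DestinationPort values(threat) as threat by SourceIP DestinationIP
| where isnotnull(threat)
```

The `stats ... by` collapses rows from both sides that share the key pair, and the final `where` keeps only pairs that appear in the THREAT logs.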
Can I load the data I accumulated in Splunk Enterprise 9.0.2 into Splunk Enterprise 7.3.3?
I am currently testing Splunk Cloud DDSS to AWS S3 buckets. I see logs in my S3 bucket once an index gets rolled over to S3 after its "Searchable Retention" period ends. My question: the logs that I see in the S3 buckets are compressed using ".zst". Is this a configuration from Splunk or AWS, and is there a way to change it to gzip? Can we not have the logs in their default extension and gzip them accordingly?

My next step is to test the restore process, which requires a standalone Splunk Enterprise instance. How should I go about that: one indexer and one search head, assuming it will be for one index only?

Thank you
Hello, I am currently running into an issue where I am unable to store/retrieve any data from my storage/passwords endpoint using the Splunk SDK for Python. Here is the message I keep receiving. I have yet to have success with this, but here is the code:

def _load_secrets(self):
    service = client.connect(host="localhost", app="myapp", owner="admin", token=self.sessionKey)
    self.secrets = service.storage_passwords

This function gets called in my __init__ method when the object is instantiated. I am storing the secrets object in a class attribute so it is accessible to all methods that need to interact with this collection. I have tried this in functions outside of the class I created, and that failed as well. I have tried changing the owner to "nobody", and I have tried changing the scheme to "http" and setting the verify arg to False, but none of this has helped.

I have passAuth enabled for "splunk-system-user" in my inputs.conf file to allow use of the session key taken from standard input (I am getting the session key without issue). It appears that I connect successfully, but when I attempt to access the storage/passwords collection it fails.

PS: I will be storing an API key and the credentials to retrieve it in here. I successfully store the credentials from my JS function for the setup; my issue is only with Python. Does anyone know how to fix this?
Hi, we deployed the Splunk Add-on for Unix & Linux on a few AIX and Netezza servers and noticed some missing metrics.

NETEZZA: No df metrics are being returned. The error message in the _internal index shows the following: Splunk_TA_nix/bin/df_metric.sh" df: unrecognized option '--output=source,fstype,size,used,avail,pcent,itotal,iused,iavail,ipcent,target'

AIX: The following iostat metrics are not being returned: iostat_metric.rReq_PS, iostat_metric.wReq_PS

Thanks, AKN
I built a search for the following situation. However, I would like to improve performance so that when a user wants to search by Name, only indexes A and B are searched, not index C. Can I achieve this? Thanks a lot.

| multisearch [ index=A | search Name=* Results=* ] [ index=B | search Name=* Age=* Results=* ] [ index=C | search Name=* Age=* ]
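If the Name-only case never needs index C, the simplest form just drops that leg (a sketch keeping the filters from the post; note each multisearch subsearch should start with the `search` command):

```
| multisearch
    [ search index=A Name=* Results=* ]
    [ search index=B Name=* Age=* Results=* ]
```

If the choice has to be dynamic, e.g. in a dashboard, a token can swap in the index list or the extra leg at run time instead of hard-coding two variants.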
Hi, I have an unusual scenario for the data I am working with and would like to see if it's even possible to extract data this way. In brief, I parsed a value from my initial search into a variable using rex, and now I want to use only that value in a new query, instead of as a sub-query.

Workflow:
1. Find all successful test runs for a suite (this is a long query).
2. Find the reporting_url via an event on each run.
3. Parse the uuid from reporting_url (I used rex on the raw data and saved it in a variable like res_uuid).
4. Search only that uuid, as it has multiple test_id records showing Pass/Fail counts (and eventually create a graph from them).

To make a simple example: the first query gives a test-suite-level record, which I parse to get the UUID value; the second query is an independent query using that UUID, which I then use to build the graph. Please note that the results of the 2nd query are not linked to the 1st query, and a sub-search would only give one record. (Apologies if it's a very common workflow, but I was not able to find it easily.)
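One pattern that matches this workflow is the `map` command, which runs the second search once per result of the first, substituting the extracted field via `$res_uuid$`. A sketch (index, field names, and the uuid regex are placeholders):

```
index=test_results "suite run passed"
| rex field=reporting_url "(?<res_uuid>[0-9a-f-]{36})"
| dedup res_uuid
| map maxsearches=100 search="search index=test_results uuid=$res_uuid$ | stats count by test_id status"
```

Unlike a sub-search, `map` keeps the full result set of each inner search, so every uuid contributes all of its test_id rows to the final output.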
Hi, I have a list of hosts whose status I want to check, so I created an if statement to filter out the ones that do not meet the condition, and then an action to ping the ones that do meet the IF statement. For example, with hosts host1, host2, host3, host4 and the condition: if host == host1 OR host == host4, the next action should scan ONLY (host1, host4). I have the playbook working with all the actions, but I just could not figure out how to process only the hosts that meet the IF condition. Thanks
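In a SOAR playbook the usual fix is to wire the downstream action to the filter block's matched output rather than to the original list source, so only matching items flow on. Outside the product, the intended logic can be sketched in plain Python (the host names, condition, and `scan` stand-in are the example's, not real SOAR APIs):

```python
hosts = ["host1", "host2", "host3", "host4"]

def condition(host):
    # the IF condition from the playbook example
    return host in ("host1", "host4")

# only the hosts that matched the filter feed the next action
matched = [h for h in hosts if condition(h)]

def scan(host):
    # stand-in for the ping/scan action
    return f"scanned {host}"

results = [scan(h) for h in matched]
print(results)  # → ['scanned host1', 'scanned host4']
```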
I am getting the following errors when trying to use Discover Content. I have searched online and am not finding any good explanation for them. I also tried downgrading app versions and got the same errors. We just deployed Splunk recently and have some servers reporting in; Splunk itself is not reporting any errors or issues.