All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

Hi, below is my log:

"{"log":"{'URI': '/api/**/***/search?', 'METHOD': 'POST', 'FINISH_TIME': '2021-Dec-15 12:15:04 CST', 'PROTOCOL': 'http', 'RESPONSE_CODE': 202, 'RESPONSE_STATUS': '202 ACCEPTED', 'RESPONSE_TIME': 4.114464243873954} ","service_name":"Digdug/digdug","container":"Digdug-digdug-2","environment":"PROD"}"

I want to extract the "RESPONSE_CODE" value and show it like below:

RESPONSE_CODE   Count
202             1
200             6

Thanks
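A minimal search-time sketch, assuming RESPONSE_CODE is not already auto-extracted and that the raw event keeps the layout shown above (the index and sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex "'RESPONSE_CODE':\s+(?<RESPONSE_CODE>\d+)"
| stats count as Count by RESPONSE_CODE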
To predict traffic, I'm building a time series model (ARIMA). I'm unable to save a fitted ARIMA model. Error: Error in 'fit' command: Algorithm "ARIMA" does not support saved models. I don't wish to retrain the ARIMA model repeatedly, as the SLAs we have to meet are tight. Please recommend a way to save the ARIMA model, or another algorithm/method that would handle this better.
Hi, could you please help me with the below-mentioned query: how do I create a recurring maintenance window in Splunk ITSI? Thanks, Prasanth G
Hi All, I am displaying the names based on dates and used a where condition to display only values that are greater than 100 (where runs > 100). Below is how the table shows, but I want to display the other values in each row with their actual values instead of showing them as empty.

| where runs > 100
| xyseries Name dayOfDate runs

Name    Date1   Date2   Date3   Date4   Date5
Sachi   101
Kohli           108
ABD             104             105
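One possible approach, replacing the where clause above (a sketch; it assumes you want to keep every date column for any Name whose maximum runs exceeds 100, and that the events carry Name, dayOfDate, and runs):

| eventstats max(runs) as max_runs by Name
| where max_runs > 100
| fields - max_runs
| xyseries Name dayOfDate runs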
Hello Fellow Splunkers! I have an environment that's using Twistlock and is deployed in EKS. We are able to collect the majority of logs via Kubernetes logging; however, our team really wants to utilize the application created for Twistlock (https://splunkbase.splunk.com/app/4555/). Has anyone else run into issues using the app for this architecture type? If not, has anyone successfully configured this application to use the predefined sourcetypes shown in the app? Any guidance will be greatly appreciated!
Hello, splunk show-decrypted does not seem to work on a UF. Is there another way to recover a forgotten admin password? Thanks.
Requesting assistance with removing characters from logs at search time.

Sample data:

"{"log":"{\"@t\" "2021-12-15T16:26:36.1571090Z\",\"@m\" "\\\"http\\\" \\\"GET\\\" \\\"/api/v1/"

I'm trying to remove the extra \ and \\ that came in with the data via HEC.
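A search-time sketch using sed-style replacement to strip literal backslashes from _raw (display only; the exact amount of escaping in the sed expression may need tuning for your events, and the index/sourcetype are placeholders):

index=your_index sourcetype=your_sourcetype
| rex mode=sed field=_raw "s/\\\\+//g"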
[new]
DATETIME_CONFIG=/etc/apps/Test/local/datetime.xml
SHOULD_LINEMERGE=false
BREAK_ONLY_BEFORE=\nExecution\sServer
CHARSET=UTF-8
TIME_FORMAT= %H:%M:%S.%3N
MAX_EVENTS=10000
SEDCMD-test=s/Ex\w.*\nS\w+.*\n+\+-.*\n\|\s+\w.*\n\+-.*|\|Ste\w.*\n\|P\w.*\n\|T\w.*\n\|V\w.*\n+\|\n\|Va\w.*|\|Para.*|\+-.*//g
TRUNCATE=0
Hello, I have 10 servers for the same purpose. If one server is down, the others remain active so there is no loss of business continuity. ABC.log is generated across all the servers with the same content. We need to add all 10 servers in serverclass.conf, and we did so. But we are getting ABC.log into Splunk multiple times, i.e., 5 to 6 times, or one event repeating 5 to 6 times. I'd appreciate any help to avoid multiple ingestion of the same log from different servers, or to avoid the duplicate events. I added crcSalt in inputs.conf, but it is not working. Thanks
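As a search-time workaround only (it does not stop the duplicate ingestion), a sketch that collapses identical events regardless of which server sent them; the index and sourcetype names are placeholders:

index=your_index sourcetype=abc_log source="*ABC.log"
| dedup _raw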
Hello,
Due to a specific requirement we have to install a Splunk Universal Forwarder acting as an "intermediate forwarder". Basically it will receive data via TCP (to leverage the persistent queue), and it has to forward that data out over HTTP. Forwarding data over HTTP is possible since Splunk Universal Forwarder 8.x: https://docs.splunk.com/Documentation/Forwarder/8.2.3.1/Forwarder/Configureforwardingwithoutputs.conf#Configure_the_universal_forwarder_to_send_data_over_HTTP

Here is the setup:

# inputs.conf
[tcp://9997]
persistentQueueSize=1000MB
connection_host=none
disabled=false

# outputs.conf
# Example from Splunk
[httpout]
httpEventCollectorToken = eb514d08-d2bd-4e50-a10b-f71ed9922ea0
uri = https://10.222.22.122:8088

What we also want to achieve is to forward only the data received via TCP, and not to forward the Splunk UF internal logs. I didn't find any kind of _HTTP_ROUTING setting (like, for example, _TCP_ROUTING) to put in inputs.conf. So, after listing all the Splunk UF inputs with this command:

/opt/splunkforwarder/bin/splunk btool inputs list --debug

I was thinking about this configuration:

# props.conf
[source::/opt/splunkforwarder/...]
force_local_processing = true
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Do you think it is going to work? Maybe another option could be to tag TCP inputs by host based on DNS or IP, and then send to the nullQueue all the logs produced by the Splunk UF itself:

# inputs.conf
[tcp://9997]
persistentQueueSize=1000MB
connection_host=dns
disabled=false

# props.conf
[host::mysplunkUFhostname]
force_local_processing = true
TRANSFORMS-null = setnull

# transforms.conf
[setnull]
REGEX = .
DEST_KEY = queue
FORMAT = nullQueue

Do you see any other possible configuration?

Thanks a lot,
Edoardo
Hi, I need help with a Splunk search query where an incident needs to be generated for a log backup failure after 3 consecutive failures. /nanoo1
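A sketch of one way to detect three failures in a row with streamstats; the index name, host grouping, and status field/values are assumptions about what the backup logs actually contain:

index=backup_logs status=*
| sort 0 host _time
| streamstats reset_on_change=true count as consecutive by host status
| where status="failure" AND consecutive >= 3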
I am using Splunk to search historical data in a virtual index, but I have noticed that the default date_year is being incorrectly added. My data is from 2020, and when I search I specify a source pointing to a particular directory based on the date at which it was ingested. Unfortunately the logs in question have a timestamp in the following format: %b %e %H:%M:%S, i.e., no year. When I run my search looking in the folder for 15/08/2020, some of the default dates are 2020 but some are 2021.

index=vix_web source="/data/xx/xxx/xxx/xxx/2020/08/15"

Having done some research on how the default times are extracted, it would seem datetime.xml is used, but I still don't know where the year is extracted from. Can anyone help?
Hello, I would like to center the dates on my timechart (column chart). I'm using the timechart command to get a table that is then transformed into a column chart. How can I do this? Thank you. Best regards,
Hi, I am getting the following error on my search head whenever I run a query in a newly created app:

Search results might be incomplete: the search process on the peer: indexer1 ended prematurely. Check the peer log, such as $SPLUNK_HOME/var/log/splunk/splunkd.log, as well as the search.log for the particular search.

[Indexer 1] Search process did not exit cleanly, exit_code=255, description="exited with code 255". Please look in search.log for this peer in the Job Inspector for more info.

In search.log this is the error:

12-15-2021 05:42:06.881 ERROR dispatchRunner - RunDispatch::runDispatchThread threw error: Application does not exist: "

The above error is present on all four of our indexers. What is the cause, and how do we fix this error?

Thank You!
Hi, when I'm deploying new changes to my services, I want to compare the last day's error logs to last week's to see if there has been an increase for a specific message. I'm having trouble figuring out how to display the counts for the different time ranges by message. The following kind of gives the correct result, but the same message for last week and this week will not be grouped correctly:

sourcetype="my pod" level="error"
| eval marker = if (_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| multireport
    [ where marker="thisweek" | stats count as "this week" by message ]
    [ where marker="lastweek" | stats count as "last week" by message ]

Grateful for any help
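One way to get both counts grouped on the same message row is to chart by the marker instead of using multireport (a sketch; it assumes the search time range covers both periods):

sourcetype="my pod" level="error" earliest=-8d@d
| eval marker = if(_time < relative_time(now(), "-1d@d"), "lastweek", "thisweek")
| chart count over message by marker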
I have an employees list, and each employee has two possible statuses: logged in and logged out. I need to find each user's last status, and if a user's last status is logged out, I need to count how many employees are logged out.
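A sketch of one way to do this; the index name and the employee/status field names and values are assumptions about the data:

index=employee_activity
| stats latest(status) as last_status by employee
| where last_status="logged out"
| stats count as logged_out_employees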
Hi, I have created an app using the Add-on Builder, by:

collectionName = "myKVStore"
service = connect(scheme=scheme, host=splunkd_host, port=splunkd_port, token=helper.session_key, owner="nobody")
if not collectionName in service.kvstore:
    service.kvstore.create(collectionName)

I would like to see the data in my KV store in Splunk. I have tried to query the API with no luck, and also tried to define it in transforms.conf:

[myKVstore]
external_type = kvstore
case_sensitive_match = false
collection = myKVstore
fields_list = _key, ....

If I then query it with |inputlookup ... I get "collection does not exist". But it works fine within my code. Ideas?
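A diagnostic sketch that lists the KV store collections the search head actually exposes and which app owns them (assuming REST access from the search bar; the title filter uses the collection name from the code above):

| rest /servicesNS/nobody/-/storage/collections/config splunk_server=local
| search title="myKVStore"
| table title eai:acl.app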
I am stuck with a query where I am trying to pass a field value from a subsearch to the parent search. Query:

index=f5 sourcetype="*f5*" earliest=-1d@d latest=d@d
    [| inputlookup user where country="US"
     | fields UserName
     | rename user_name ]

Explanation: the field that is going to match from the subsearch is user_name. In the parent search there are two fields for the user, user_name and Account_name, and I need both of them in the end result (user_name contains internal users; Account_name contains external users). I tried using coalesce to merge both fields in the parent search, but eval throws an error. Can anyone please help me solve this problem?
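A sketch of one common pattern: merge the two user fields first, then filter on the merged field with the subsearch. The merged field name "user" and the rename target are assumptions; adjust the lookup field names to match the lookup:

index=f5 sourcetype="*f5*" earliest=-1d@d latest=@d
| eval user=coalesce(user_name, Account_name)
| search
    [ | inputlookup user where country="US"
      | fields UserName
      | rename UserName as user ]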
I want to see the results where src_ip and dst_ip are the same and the check is "OK", and the count of those results. What should I do? The code I made doesn't work well:

index="my_index"
| eval cheack=if(html_code==200,"error","OK")
| stats list(src_ip) as src_ip list(dst_ip) as dst_ip by cheack
| table src_ip , dst_ip , cheack , count
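A sketch of one interpretation: count events where the source and destination IP match and the check is "OK" (this assumes html_code==200 should map to "OK", the opposite of the posted eval):

index="my_index"
| eval check=if(html_code==200, "OK", "error")
| where check="OK" AND src_ip=dst_ip
| stats count by src_ip dst_ip check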
Hello, I am using the autoencoder in DLTK. I want to add partial-fit functionality to it. I was able to add a partial_fit function to the autoencoder model in /srv/app/model/, and I also made the required changes in /srv/app/index.py. However, those changes are not reflected when using the SPL query in DLTK. I am not able to figure out when the index.py file is loaded if we make changes to it. Can someone please help me understand when the index.py file in /srv/app is loaded?