All Topics


Hello, does anyone know if there's a way to monitor/track API calls to a Splunk Cloud instance? I'm looking particularly for the IP address and user account used for each API call to the cloud instance. I've done some searching and come up empty. Thanks.
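One possible starting point, assuming your Splunk Cloud subscription exposes the _internal index (not all tiers do): splunkd's HTTP access log records REST API requests along with the client IP and user, so a search along these lines may show what you need:

```
index=_internal sourcetype=splunkd_access
| stats count by clientip, user, uri_path
| sort - count
```

This is only a sketch; if _internal isn't searchable on your stack, you may need to raise a support request for audit access.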
Hi, I want to extract the fields Name, Version, VendorName, usesLicensing, LicenseType, ExpiractDateString, LicenseKey, and SEN, based on the delimiter (:), from the raw data below. Could someone please help me with the query for this field extraction?
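Since the raw data isn't shown here, this can only be a sketch. If each field appears as a `FieldName: value` pair, the built-in `extract` command with a colon as the key-value delimiter may already do the job (the pair delimiter below is an assumption; adjust it to whatever separates the pairs in your events):

```
... | extract kvdelim=":" pairdelim=","
```

Alternatively, if the layout is irregular, a `rex` per field works, e.g. `| rex field=_raw "Name:\s*(?<Name>[^,]+)"`.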
I've built a Splunk App with lookups. On app upload the lookup definitions are created, but the CSVs are not.

App Folder Structure

my_app
├── bin
│   ├── README
│   └── app.manifest
├── default
│   ├── app.conf
│   ├── data
│   │   └── ui
│   │       ├── nav
│   │       │   └── default.xml
│   │       └── views
│   │           └── README
│   └── transforms.conf
├── lookups
│   ├── test_falsePositive.csv
│   └── test_metadata.csv
└── metadata
    └── default.meta

default.meta

[]
access = read : [ * ], write : [ admin, mgmt ]

### LOOKUPS
[lookups]
export = system

[transforms]
export = system

transforms.conf

[test_falsePositives]
CAN_OPTIMIZE = 1
CLEAN_KEYS = 1
DEPTH_LIMIT = 1000
KEEP_EMPTY_VALS = 0
LOOKAHEAD = 4096
MATCH_LIMIT = 100000
MV_ADD = 0
SOURCE_KEY = _raw
WRITE_META = 0
batch_index_query = 0
case_sensitive_match = 0
disabled = 0
filename = test_falsePositive.csv

[test_metadata]
CAN_OPTIMIZE = 1
CLEAN_KEYS = 1
DEPTH_LIMIT = 1000
KEEP_EMPTY_VALS = 0
LOOKAHEAD = 4096
MATCH_LIMIT = 100000
MV_ADD = 0
SOURCE_KEY = _raw
WRITE_META = 0
batch_index_query = 0
case_sensitive_match = 0
disabled = 0
filename = test_metadata.csv
Hello, I am on Splunk 7.0.2, configured in a distributed environment. I installed Splunk DB Connect on a SHC, then from one of my search heads in the UI I added my first input. (Note: the connection to the DB is fine; executing the SQL query during setup yields the expected result.) However, after adding the input, I can see that DB Connect does not run the query at all; it ignores the frequency at which the query should run. This is visible in $SPLUNK_HOME/var/log/splunk/splunk_app_db_connect_server.log. The data is not saved at all, and I am not sure what I am missing or doing wrong. What is quite strange is that I cannot find any errors in the logs that would help me debug the cause, besides this one: ch.qos.logback.core.Appender.error in splunk_app_db_connect_health_metrics.log. Note: the SHC is configured properly and is connected to the indexers. I have been facing a lot of issues with this. Please help me find the solution, or point me towards how I can debug it. Thanks, Mark
Hello, I am having an issue in Splunk: my buckets never get fixed up, and the SF and RF are never met. I have 3 indexers with a CM in one cluster, 3 search heads in a cluster, and 6 indexers with a CM in another cluster. Both the 6-indexer and the 3-indexer cluster are integrated with one SHC. I set SF=2 and RF=2 for this instance, but when I check one of the buckets, it shows replication count by site: default 1, search count by site: default 1, and origin site: default. I set SF and RF to 2, so I don't know why it's showing 1. @somesoni2 @woodcock
Good day. I'm trying to write a Python script that will be called from a Splunk search. The script has a generating command:

@Configuration(type='reporting')
class getInfo(GeneratingCommand):
    def generate(self):
        ....
        ....

After the generating command I want to send an email. I have managed to get the generating command working, as well as the email sending, but not both in the same script. For the generating command, the commands.conf details are:

[kvstoreupdate]
filename = kvstore_update.py
chunked = true
generating = true

For the sendemail script, commands.conf is:

[testemail]
filename = email_test.py
streaming = false
run_in_preview = false
passauth = true
required_fields =
changes_colorder = false
supports_rawargs = true
undo_scheduler_escaping = true
is_risky = true
supports_multivalues = true

I have found that the testemail script does not work if I set "chunked = true", i.e. make use of SCPv2. Conversely, the generating command does not work if "chunked = true" is not set; I get the error:

Script output = "chunked 1.0,426,0.... .... .... Failed to parse transport header: authString:<auth> .... ...."

I'm wondering if it is possible to do both of these actions in one script, since they seem to be compatible with different versions of SCP?
I am using the predict function to try to forecast about an hour into the future for volume. In doing so, the future predictions almost alternate between two different points; for example, the forecast would be 7, 3, 7, 3, 7, 3, ..., where I would expect a smoother line. I am using 6 weeks of data at a 15-minute span. Has anyone had this issue before, and do you know how to resolve it or what might be causing it?
Hello, I'm currently creating saved searches from JavaScript inside a setup page. When trying to add the `counttype`, `relation` and `quantity` parameters, however, the request fails with an error 400. I had a look at the documentation (https://docs.splunk.com/Documentation/Splunk/7.0.2/RESTREF/RESTsearch#saved.2Fsearches), which does not list these as available parameters. How should I set these using the REST API?
All, I'm in the process of trying to figure out architecture/hardware requirements for upgrading our current all-in-one deployment.

Current Architecture
Azure Windows 2016, 56 GB RAM, 16 vCPUs, with a 2 TB data drive (7500 max IOPS). Data drive (SSD) at 98% capacity. Roles: Indexer, Deployment Server, Search Head, Web Server. Current data ingestion rate is 123.97 GB/day.

Deployment Goals
Use as few Azure Windows 2019 servers as possible, with minimum hardware, to reduce costs. Increase the 2 TB drive to a 4 TB GPT drive (or delete older unnecessary data and stay at 2 TB). Preserve existing knowledge objects and index data, and increase performance.

Questions
1) Can the Search Head, Deployment Server and first Indexer stay on their own server (as they are now), with the second Indexer on a second server by itself? In other words, can I use only two servers, or do I need three servers as most documents depict?
2) If deploying a second Indexer, does it have to have the same size disk as the current Indexer? And if so, how is the current data on the almost-full 2 TB drive dealt with once there is a second Indexer?
3) Does anyone have experience with, or know the feasibility of, running the Search Head (or other components) in a virtual (Azure) container, again in order to reduce hardware costs?
4) Any other suggestions to aid in cost reduction are appreciated.

Best regards, Greg
Hello, recently I was configuring exceptions and ignore messages, guided by this article. I had the following exception that needs to be ignored. The exception info I got from the stack trace was as follows:

SamisWsFault:WebservicesFault: SamisWSFault:WebServicesFault:.com.ibm.ws.webservices.engine.WebServicesFault <some messages> at rest of stack trace....

I created three rules to include, as follows:

1- class=SamisWsFault:WebservicesFault:
   message=Is not empty
2- class=SamisWsFault:WebservicesFault:com.ibm.ws.webservices.engine.WebServicesFault
   message=Is not empty
3- class=com.ibm.ws.webservices.engine.WebServicesFault
   message=Is not empty

I also added the messages I want to ignore in the ignore-messages section, but I still see these exceptions flooding into my applications, so apparently these rules didn't work. Could anyone help me figure out where I went wrong? Thanks in advance.

^ Edited by @Ryan.Paredez for readability. This conversation was originally on this TKB article: How do I exclude errors and exceptions from detection?
Hi, I want to search the index for eventtypes which have "service" or "window" in the value:

index=sdsf | search eventtype="*service*" or "*window*" | stats count by eventtype

This is not working. Can you help me figure out whether OR will work here or not?
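In SPL the OR operator must be uppercase, and the field name has to be repeated on each side of it. A corrected version of the search above would be:

```
index=sdsf eventtype="*service*" OR eventtype="*window*"
| stats count by eventtype
```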
Dears, we are monitoring IBM WebSphere applications with AppDynamics, using Java app agent version 20.8, but we get a huge flood of exceptions of type (com.singularity). I need to know if this exception is related to AppDynamics or to our application. I will attach a screenshot here for your reference.
Hi, I'm very new to Splunk, and struggling to find a way to filter out a specific log which is consuming a large proportion of my license. I have a Cisco ASA set up to send events as syslog to a Splunk UDP port. I've restricted the logs to what I want to see by using the built-in filter tools within the ASA. From what I can see on the forum, lots of people ask how to filter based on Syslog ID alone, but I want to filter based on Syslog ID 302013 combined with IP xxx.xxx.xxx.xxx: that is, I want to keep 302013 events except those containing that specific IP. I don't even know where to start, but I know this can't be done from the Cisco device, so it has to be done on the Splunk server. I would really appreciate someone pointing me in the right direction. Thanks, Tim
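A common approach is to drop the matching events at parse time on the indexer (or heavy forwarder) by routing them to the nullQueue. A sketch, assuming the ASA events arrive with a sourcetype such as cisco:asa (adjust the stanza name, regex, and IP to your data):

```
# props.conf
[cisco:asa]
TRANSFORMS-drop302013 = drop_302013_for_ip

# transforms.conf
[drop_302013_for_ip]
REGEX = 302013.*xxx\.xxx\.xxx\.xxx
DEST_KEY = queue
FORMAT = nullQueue
```

Events matching the regex are discarded before indexing, so they should not count against the license.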
Hello everybody, we are monitoring several directories via Universal Forwarder, each with a large XML file in it (around 1000 lines). These files change every few seconds, and the change also involves the timestamp, which is written in the first 256 bytes of the file. I need to ingest these files entirely at every change but, instead, Splunk ingests them only once every few hours, or even days. Do you have any suggestions on how I can fix this?

Here's the props.conf on my heavy forwarders (we have a distributed environment):

[xml_atm]
TRANSFORMS-routing=xmlatm-route
SHOULD_LINEMERGE=true
LINE_BREAKER=(?:restart)([\r\n]+)
CHARSET=ISO-8859-1
CHECK_METHOD = modtime
MAX_EVENTS=4000
TRUNCATE=0
disabled=false
TIME_PREFIX=restart-flag="
REPORT-xmlext=xml-extr

While inputs.conf on the UF is this:

[monitor://D:\ABC\Monitor\Monitor\Inputs\*\*.xml]
disabled = 0
host_segment = 5
index = my_index
sourcetype = xml_atm

Thanks in advance.
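One thing worth checking (a sketch, not a definitive fix): CHECK_METHOD is an input-time setting, so it takes effect where the file is actually read, i.e. on the Universal Forwarder, not on the heavy forwarder. Deploying it to the UF, together with a larger CRC window so that rewrites are detected as changes, might look like this:

```
# props.conf on the Universal Forwarder
[xml_atm]
CHECK_METHOD = modtime

# inputs.conf on the Universal Forwarder (existing stanza plus initCrcLength)
[monitor://D:\ABC\Monitor\Monitor\Inputs\*\*.xml]
disabled = 0
host_segment = 5
index = my_index
sourcetype = xml_atm
initCrcLength = 1024
```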
Hi, we are using a Splunk Cloud instance. Is there anything we can try in Splunk without using JS code? Specifically: how can we add a tooltip to individual cells of a specific table column, without using JavaScript? This link helps, but with JavaScript: https://www.splunk.com/en_us/blog/tips-and-tricks/add-a-tooltip-to-simple-xml-tables-with-bootstrap-and-a-custom-cell-renderer.html. Can this be implemented without JavaScript? Any input will be appreciated. Thank you.
Is there a way to create a chatbot within Splunk that answers and functions according to user questions?
I am trying to calculate lag time but have the following issues: _time is the same for each event, as the data is indexed in chunks. I am trying to take the highest result from the field access-time and calculate the difference from the second-highest result, something like | eval resultA - resultB. How do I get the 2 latest results from the field access-time and calculate the difference?

2020-11-13 08:18:37 1605254674
2020-11-13 08:18:37 1605254590
2020-11-13 08:18:37 1605253080
2020-11-13 08:18:37 1605252671
2020-11-13 08:18:37 1605251083
2020-11-13 08:18:37 1605250993
2020-11-13 08:18:37 1605249063
2020-11-13 08:18:37 1605247382
2020-11-13 08:18:37 1605245462
2020-11-13 08:18:37 1605243784
2020-11-13 08:18:37 1605241862
2020-11-13 08:18:37 1605240185
2020-11-13 08:18:37 1605238263
2020-11-13 08:18:37 1605236583
2020-11-13 08:18:37 1605234662
2020-11-13 08:18:37 1605232983
2020-11-13 08:18:37 1605231063
2020-11-13 08:18:37 1605229384
2020-11-13 08:18:37 1605227467
2020-11-13 08:18:37 1605225783
2020-11-13 08:18:37 1605223863
2020-11-13 08:18:37 1605222196
2020-11-13 08:18:37 1605220274
2020-11-13 08:18:37 1605218605
2020-11-13 08:18:37 1605216684
2020-11-13 08:18:37 1605214996
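A sketch in SPL, assuming access-time holds epoch seconds (renaming it first, since the hyphen would otherwise need quoting everywhere): sort by the value descending, keep the top two, and take the range:

```
... | rename "access-time" AS access_time
| sort 0 - access_time
| head 2
| stats range(access_time) AS lag_seconds
```

With the sample above this returns 1605254674 - 1605254590 = 84 seconds.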
I have the field src_ip in my data. My lookup fields are: ip1, ip2, ip3, ip4, user. What I want is to find matching pairs between src_ip and ip1, ip2, ip3, ip4, and OUTPUT the name of the user this src_ip belongs to. How can I do this?
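A sketch, assuming the lookup is defined as user_ips (a hypothetical name; substitute your own): run the lookup once per IP column, matching src_ip against each one, then coalesce whichever match succeeded:

```
... | lookup user_ips ip1 AS src_ip OUTPUT user AS user1
| lookup user_ips ip2 AS src_ip OUTPUT user AS user2
| lookup user_ips ip3 AS src_ip OUTPUT user AS user3
| lookup user_ips ip4 AS src_ip OUTPUT user AS user4
| eval user=coalesce(user1, user2, user3, user4)
| fields - user1 user2 user3 user4
```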
Hi, I need to assign the values of a field to a new field and group by the new field. For example:

Field1  Field2
AppA    xxxx
AppA    yyyy
AppA    zzzz
AppB    xxxx
AppB    yyyy

I want a stats count with a new field (or value) for every combination of Field1 and Field2; i.e., in the result above, the new field (Field3, say) should return 3 and 2 for the respective apps. I was told this might be achievable through lookup definitions and tables, but I am new to it. Any help would be great.
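If the goal is simply the number of distinct Field2 values per Field1 (3 for AppA, 2 for AppB in the sample), a plain stats does it with no lookup required:

```
... | stats dc(Field2) AS Field3 by Field1
```

dc() counts distinct values; use count instead if duplicate Field2 values should be counted separately.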
The token value "*" is passed on the first click after the page loads, but not afterwards; on subsequent clicks the token value * is not passed. How can I fix it? @kamlesh_vaghela @Anonymous

<input type="link" value="link4">
  <label>Choose a sourcetype:</label>
  <choice value="link4">Internal Event Count</choice>
  <change>
    <condition value="link4">
      <set token="SELECTED_PU">*</set>
      <set token="SELECTED_PL">*</set>
    </condition>
  </change>
</input>
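A sketch of one possible fix (Simple XML): the <change> block only fires when the selected value actually changes, so clicking the same link again does nothing. Setting the tokens unconditionally inside <change>, and giving the input a token and a default, makes the behavior more predictable:

```
<input type="link" token="link_tok">
  <label>Choose a sourcetype:</label>
  <choice value="link4">Internal Event Count</choice>
  <default>link4</default>
  <change>
    <set token="SELECTED_PU">*</set>
    <set token="SELECTED_PL">*</set>
  </change>
</input>
```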