
Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.

All Topics

Hi, I'm trying to create a set of playbooks to unit test other playbooks. Is it possible to run a playbook without providing an event ID? Thanks, Ansir
Hey, I have a Splunk Enterprise environment with a cluster of 4 search heads, 5 heavy forwarders, and 3 indexers. A number of alerts are configured on my search heads; the alerts use the 'collect' command, which writes the events returned by the query into some index. For example:

index=Example ... | collect index=production

This worked for about 6 months, but now, when I search for events in index "production", I get 0 events. I looked for errors and bugs with the support of a Splunk specialist, but we didn't find a solution. One speculation we had was the 'stashParsing' queue, which is configured on the search heads and used by the 'collect' command. In the '_internal' index we found logs reporting the queue's 'max_size=500KB' and 'current_size'. Over the last 30 days, 'current_size' was 0 about 99.9% of the time, and 494, 449, 320, or 256 the remaining fraction of the time. I tried increasing the queue's 'max_size' by creating a file named 'server.conf' at $SPLUNK_HOME/etc/shcluster/apps/shd_base with the following content:

[stashparsing]
maxsize=600MB

I distributed this to the search head cluster, but it did not seem to have any effect.

Splunk version: 8.1.3
Linux version: Red Hat Enterprise Linux 7.8

This is an air-gapped environment, so I cannot attach any logs or data.
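For comparison, here is a minimal sketch of the queue-sizing syntax documented in the server.conf spec. The stanza form is [queue=<queueName>] with a camelCase maxSize key, which differs from the stanza above; whether the stash-parsing queue is tunable this way on 8.1 is an assumption to verify against your version's spec file, and the file needs to sit in the app's default or local directory before the deployer pushes it.

    # $SPLUNK_HOME/etc/shcluster/apps/shd_base/local/server.conf
    # Named queues are sized with a [queue=<name>] stanza and maxSize (camelCase)
    [queue=stashparsing]
    maxSize = 600MB

After staging the file on the deployer, the bundle is pushed with something like: splunk apply shcluster-bundle -target https://<any_sh>:8089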
ResourceInitializationError: failed to validate logger args: Options "https://prd-p-88jca.splunkcloud.com:8088/services/collector/event/1.0": dial tcp 52.203.227.66:8088: connect: connection timed out : exit status 1

1. I followed this link for task logs: https://www.splunk.com/en_us/blog/platform/docker-amazon-ecs-splunk-how-they-now-all-seamlessly-work-together.html
2. I also followed this one: https://bobcares.com/blog/use-splunk-log-driver-with-ecs-task-on-fargate/

Please give your answer to this post.
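For reference, a minimal sketch of the splunk log driver section of an ECS task definition, using the standard Docker splunk driver options; the token is a placeholder, and the URL shown is the Splunk Cloud HEC endpoint form (https://http-inputs-<stack>.splunkcloud.com:443), which is often reachable where port 8088 on the stack hostname is not — worth checking given the connection timeout above.

    "logConfiguration": {
        "logDriver": "splunk",
        "options": {
            "splunk-url": "https://http-inputs-prd-p-88jca.splunkcloud.com:443",
            "splunk-token": "<your-HEC-token>",
            "splunk-format": "json"
        }
    }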
Hi all, could someone please help me with this query? I have 3 different sources whose fields I want to match.

Source A contains K_USER, ID
Source B contains RECID, USER_RESET
Source C contains USER, NAME

I have done the query in 2 steps:
1. Join A and B using ID (RECID=ID) and get USER_RESET.
2. Join the result from step 1 with C: match K_USER and USER_RESET against USER to get the names from source C.

To explain using an example:

Source A
K_USER  ID
ABN     1
XYZ     2

Source B
RECID  USER_RESET
1      MNP
3      IJK

Source C
USER  NAME
ABN   John
XYZ   Mary
MNP   Philip
IJK   Cathy

The final result should look like:
K_USER | ID | USER_RESET | NAME(K_USER) | NAME(USER_RESET)
ABN    | 1  | MNP        | John         | Philip

Can I achieve this without using join? Thanks in advance!!
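A minimal join-free sketch under two assumptions: source C is first materialized as a lookup (or already is one), and USER values in C are unique. The source names and lookup filename are placeholders.

    source="C" | table USER NAME | outputlookup source_c.csv

    (source="A") OR (source="B")
    | eval key=coalesce(ID, RECID)
    | stats values(K_USER) as K_USER, values(USER_RESET) as USER_RESET by key
    | where isnotnull(K_USER) AND isnotnull(USER_RESET)
    | lookup source_c.csv USER as K_USER OUTPUT NAME as NAME_K_USER
    | lookup source_c.csv USER as USER_RESET OUTPUT NAME as NAME_USER_RESET
    | rename key as ID
    | table K_USER ID USER_RESET NAME_K_USER NAME_USER_RESET

The stats-on-a-coalesced-key pattern replaces the first join; the two lookup calls replace the second.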
Hi, I created a Splunk app with React, but when I symlinked it into Splunk's application directory, I received the expected message (screenshot not shown). BUT, even after restarting my Splunk instance, the app does not appear in the list of apps on my Splunk instance. Why would this be, and how can I fix it? Many thanks.
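One common culprit is the app's metadata rather than the symlink itself; a minimal sketch of default/app.conf settings that control whether an app shows up in the launcher (the label is a placeholder, and this is an assumption about the cause, not a diagnosis):

    [install]
    state = enabled

    [ui]
    is_visible = true
    label = My React App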
Hiya, I am trying to use the ITSI REST API to update entities in my Splunk development environment. I am able to create entities and overwrite pre-existing entities that have the same `_key` value. But I want to update an entity without overwriting/deleting any data. Say I have an entity with 2 info fields, "cpu_cores = 4" and "memory = 32", and I just want to update the "cpu_cores" field to "cpu_cores = 8" and leave the "memory" field the same; whenever I execute the POST request, it overwrites the info fields and deletes the "memory" field. Below are the endpoint I am using and the JSON object to update the entity.

Endpoint:
https://<my_ip>:8089/servicesNS/nobody/itsi/itoa_interface/entity/bulk_update/?is_partial_data=1

JSON object:
[
  {
    "_key": "aa_entity_1",
    "title": "aa entity 1",
    "object_type": "entity",
    "description": "Just a test",
    "informational": {
      "fields": ["cpu_cores"],
      "values": ["8"]
    },
    "cpu_cores": ["8"]
  }
]

This JSON creates the entity fine, and my understanding from the documentation is that "is_partial_data=1" should mean it only updates data and does not remove any. I have looked around and tried different things with "is_partial_data=1": I've tried putting it into the JSON object as "is_partial_data": true, and I saw somewhere that the "/" shouldn't be present before "?is_partial_data=1", but this didn't work either. Any help would be appreciated.

Additional info:
Splunk Enterprise: 9.0.3
ITSI version: 4.13.2
Using Postman to do the POST request
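For reproducibility outside Postman, a sketch of the same request as a curl call (credentials and host are placeholders; the trailing slash before the query string is dropped here, which is the variant mentioned above):

    curl -k -u admin:changeme -X POST \
      "https://<my_ip>:8089/servicesNS/nobody/itsi/itoa_interface/entity/bulk_update?is_partial_data=1" \
      -H "Content-Type: application/json" \
      -d '[{"_key": "aa_entity_1", "title": "aa entity 1", "object_type": "entity",
            "informational": {"fields": ["cpu_cores"], "values": ["8"]}, "cpu_cores": ["8"]}]'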
Hi team, I am looking for help with sending a report. I have a scheduled report that runs every hour. Can you please advise on the search query? If I create a new alert and the alert triggers, the scheduled report should be sent to the recipients. I am aware of the CSV/PDF attachment options; I'm looking for something like sending the scheduled report's results as the notification when the alert is triggered.
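A minimal sketch of the email-action settings in savedsearches.conf that embed the triggering search's results inline in the notification, assuming the alert runs the same search as the report (the stanza name and recipients are placeholders):

    [My Hourly Report Alert]
    action.email = 1
    action.email.to = recipients@example.com
    action.email.sendresults = 1
    action.email.inline = 1
    action.email.format = table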
I have a configuration for TA-MS-AAD and we are seeing delays. I am trying to understand how _time is set.
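A quick way to quantify the delay is to compare event time with index time; a sketch, with placeholder index and sourcetype values:

    index=azure_ad sourcetype=azure:aad:*
    | eval lag_seconds = _indextime - _time
    | stats avg(lag_seconds), max(lag_seconds) by sourcetype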
Hello dear community, can you please advise me? My team is complaining that not all data comes in through the HEC token from Kubernetes. I don't see any errors in _internal for this index. But I noticed something interesting: in index="_introspection" there are gaps in the data. Could this be related? And how can I fix it?
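A sketch of a search for HEC-side problems in the internal logs; HttpInputDataHandler is the component splunkd commonly logs HEC errors under, but treat the field values as assumptions to verify on your version:

    index=_internal sourcetype=splunkd component=HttpInputDataHandler
    | timechart span=15m count by log_level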
Hi all, I have events like the ones below. How can I filter events where, for example, the 6th character in C*E**M is M? Or, if the 6th character is H, how can I filter all of those? Please assist.

C*E**M****} JAWS Process to copy the legacy Virtu ORDERDETAILSs data from IMFT to network folder
C*E**M****} JAWS Process to copy the legacy Virtu Orders data from IMFT to network folder
C*E**M****} box that contains the processes to load Portware EOD files to APP_ETT database
C*E**M****} box that load the OMS legacy tables 1 11.111%
C*E3VL****} Box that contains the jobs to download and process the ITG Placement Inbound file
C*E**H****}ox that contains the processes t
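A minimal sketch using the regex command to keep only events whose 6th character is M or H, assuming the code sits at the very start of _raw (adjust the anchor if there is a prefix):

    index=your_index
    | regex _raw="^.{5}[MH]"

To exclude those events instead, negate it with | regex _raw!="^.{5}[MH]".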
Hi all, I am at an impasse and need some ideas to overcome it. Here is the challenge:

1) I have CSV data coming into Splunk via UF (50+ sources).
2) The data is slightly restructured (not even always), and later sent as DB output via Splunk DB Connect.
3) Data is output either using upsert (for the several DBs that need it) or without upsert (for the rest).
4) The automated search (for the DB output) runs periodically (about once per hour, looking X hours back), and it covers our needs for newly added data in previously added sources.

HOWEVER, and here is the main challenge:

5) When a new source is added, its CSV files are already full of old data (older than X hours, sometimes months), so although the data is indexed, it does not get DB-outputted (great word). But I have to send the old data as well.
6) An automated "all time" search all the time is not workable (50+ sources, often with 10+ source files each): the number of SELECTs sent to the target DB is too high, the search takes Splunk too long, and the target DB gets overwhelmed and stops responding.

At the moment I have a workaround that involves modifying the automated output to search "All time" for that particular source for 1 iteration and then changing it back to normal, but the number of sources keeps growing and this is more and more inconvenient. I have a feeling there is a better (easy) solution, but it eludes me. If anyone could share ideas, that would be helpful. Have a great day!
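For the one-time backfill, one pattern is a separate ad-hoc search scoped to just the new source over all time, feeding the same DB Connect output so the hourly job stays untouched; a sketch assuming DB Connect v3's dbxoutput command, with placeholder names:

    index=csv_data source="/path/to/new_source*" earliest=0 latest=now
    | ... same restructuring as the scheduled search ...
    | dbxoutput output="my_db_output"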
Hello all, we have an issue wherein JSON files are intermittently not coming into Splunk from an SQS-based S3 input. The JSON file is generated every day at 3 AM and is generally ingested into Splunk, except for times when, even though the file gets generated, it is not ingested. Unfortunately, there are no logs whatsoever in Splunk's internal logs for the times when the file is not ingested. Can anyone suggest what the issue might be, or whether someone has encountered this? The version of the AWS add-on being used is 5.0.4, and the Splunk version is 8.1.5.
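A sketch of a search over the add-on's own logging around the 3 AM window; the source pattern is an assumption about where the AWS add-on writes its input logs, so adjust it to whatever you see in _internal:

    index=_internal source="*splunk_ta_aws*" (sqs OR s3)
    | timechart span=10m count by log_level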
I'm trying to create a dashboard to find the old version and new version of Splunk from the logs, but I am unable to find it.
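Two sketches that commonly surface version information; the REST endpoint and field names are documented, while relying on the startup banner in _internal is an assumption about your retention:

    | rest /services/server/info
    | table splunk_server, version

    index=_internal sourcetype=splunkd "Splunkd starting"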
Seeing different results when performing similar searches and not sure of the reason. The base search is the same for both.

First search:

| timechart span=5m count(eval(if(event=="Started",total,0))) as "started", count(eval(if(event=="Completed",total,0))) as "completed"
| eval divergence = completed - started

Second search:

| timechart span=5m count(eval(event=="Started")) as "started", count(eval(event=="Completed")) as "completed"
| eval divergence = completed - started

They both produce the same results, but reversed.

First query:
time | started | completed | divergence
time | 18499   | 18517     | 18
time | 18426   | 18422     | -4

Second query:
time | started | completed | divergence
time | 18517   | 18499     | -18
time | 18422   | 18426     | 4

Any help will be appreciated.
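For what it's worth, the two idioms count different things. count(eval(expr)) counts only events where expr is true, because a false comparison yields null and nulls are not counted; count(eval(if(expr, total, 0))) returns a non-null value for every non-matching event (the 0), so non-matching events are counted too, while a matching event with a null total is not. A sketch of an unambiguous form, assuming each event should contribute 1:

    | timechart span=5m sum(eval(if(event=="Started",1,0))) as started, sum(eval(if(event=="Completed",1,0))) as completed
    | eval divergence = completed - started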
Hi, is there any other workaround to integrate Tripwire Enterprise with Splunk Cloud? Is there any third-party app for this? Thank you in advance.
Hi, I have the below output:

1/16/2023 7:51:43 AM 1EE8 PACKET 000001D9C25E6180 UDP Rcv 10.8.64.132 646b Q [0001 D NOERROR] A (6)framer(3)com(0)
UDP question info at 000001D9C25E6180
Socket = 940
Remote addr 10.8.64.132, port 55646
Time Query=9030678, Queued=0, Expire=0
Buf length = 0x0fa0 (4000)
Msg length = 0x001c (28)
Message:
XID 0x646b
Flags 0x0100

The desired output:
name=framer.com
IP=10.8.64.132

I am using this regex:

sourcetype=DNSlog
| rex field=_raw "NOERROR]\W+(?P<name>.*)\sUDP \S.*\s Socket.*\s Remote addr\W+(?P<IP>.*),"
| rex mode=sed field=name "s/[\d;()]+//g"
| stats count by name IP

My code isn't working; can you please help me?
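A sketch of an alternative extraction that avoids spanning lines with a single pattern: pull each field with its own rex, then convert the (6)framer(3)com(0) label notation to dots. The field boundaries are inferred from the one sample above, so test against more events:

    sourcetype=DNSlog
    | rex field=_raw "NOERROR\]\s+A\s+(?<name>\(\d+\)\S+)"
    | rex field=_raw "Remote addr (?<IP>\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3})"
    | rex mode=sed field=name "s/\(\d+\)/./g"
    | rex mode=sed field=name "s/^\.//"
    | rex mode=sed field=name "s/\.$//"
    | stats count by name, IP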
Messages | Nov 20 | Dec 20 | Jan 20 | Feb 20
Messge 0 | 77 | 1 | 44 | 89
Messge 1 | 1 | 3 | 5 | 15
Messge 2 | 11 | 0 | 4 | 23
Messge 3 | 1 | 0 | 0 | 0
Messge 4 | 9 | 5 | 0 | 0
Messge 5 | 1 | 1 | 0 | 0
Messge 6 | 1 | 1 | 0 | 0
Messge 7 | 0 | 1 | 0 | 0

I want to color the cells based on ranges that differ per row. For example, in the "Messge 0" row, color each cell green if its value > 75 and red if its value < 10:
Messge 0 | 77 | 1 | 44 | 89
In the "Messge 1" row, color each cell green if its value > 10 and red if its value < 5:
Messge 1 | 1 | 3 | 5 | 15
Hello, I have the following query in one of the panels in my dashboard:

| mstats p95(prometheus.container_memory_working_set_bytes) as p95_memory_bytes span=1m where pod=sf-mcdata--hydration-worker* AND stack=$stackLower$ by stack
| stats min(p95_memory_bytes) as min_p95_memory_bytes by _time
| timechart span=1m count as Availability
| eval Span=1
| stats sum(Availability) as totalAvailability, sum(Span) as totalSpans
| eval AvailabilityPercent = 100*(totalAvailability/totalSpans)
| fields AvailabilityPercent

Some stacks return too many events for this metric, which causes a timeout and the search fails. Is there a way to optimize this query to work with a lot of events?
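A sketch of a lighter variant, under the assumption that "Availability" here means "fraction of one-minute spans that have any data": since the min value produced by the stats step is never used (only the presence of a row per minute matters), the per-stack split can be dropped and the populated spans counted directly, with the expected span count derived from the search window via addinfo:

    | mstats p95(prometheus.container_memory_working_set_bytes) as p95_memory_bytes span=1m where pod=sf-mcdata--hydration-worker* AND stack=$stackLower$
    | stats count as populatedSpans
    | addinfo
    | eval totalSpans = round((info_max_time - info_min_time) / 60)
    | eval AvailabilityPercent = 100 * populatedSpans / totalSpans
    | fields AvailabilityPercent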
Lately, I have been reading the Splunk Validated Architectures (SVAs) [found here]. The document states in the introduction that there is a tool called "Interactive Splunk Validated Architecture (iSVA)" and links to it (https://sva.splunk.com), but this link redirects me to the PDF itself! I have searched for the tool on Google with no luck, and now I'm wondering what happened to this tool and where I can find it. I really appreciate any help anyone can provide.

Edit: Is there any newer version of the document? The current one is from January 2021.
Dear all, we set up a few alerts that send to one recipient (a DL), and the alerts work fine now. We want to send the alert to different recipients for environment-A and environment-B; the alert search code and other setup are the same for the 2 environments. Is there any solution for this?
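One common pattern is to compute the recipient inside the alert search and reference it as a result token in the email action; a sketch, with placeholder addresses and an assumed field that distinguishes the two environments:

    ... your alert search ...
    | eval email_to = case(environment=="A", "team-a@example.com", environment=="B", "team-b@example.com")

Then set the email action's To field to $result.email_to$ (in savedsearches.conf: action.email.to = $result.email_to$). Tokens like $result.<field>$ take their value from the first result row, so this works best when each triggered alert covers a single environment.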