All Topics

Hello, I'm trying to capture DNS log traffic from an Active Directory Domain Controller. The topology is this: a Splunk Cloud instance, a heavy forwarder on my LAN, and a universal forwarder on the DC. I see multiple Stream apps in the Splunk store - which app goes where? There is "Splunk App for Stream", then there's "Splunk Add-on for Stream Forwarders", then there's something called "Splunk Add-on for Stream Wire Data" - can you please help?
Splunk UF - Hi folks, seeking help. I am new to Splunk and am trying to configure the Splunk UF. I have two VMs, both running Windows 10, and the two VMs can communicate with each other. On one VM I installed Splunk Enterprise, and on the other I installed the Splunk UF. The Splunk Enterprise VM is the receiver and the Splunk UF VM is the forwarder, so I entered the Splunk UF VM's IP on the Splunk Enterprise VM as a forwarder, with port 9997 (e.g. xx.xx.xxx:9997). I am still not receiving any logs from the UF VM. I would like to know whether the procedure I am following is correct. Your kind support would be appreciated. Thanks in advance.
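A minimal sketch of the usual direction of this setup, as an assumption about what's intended: receiving is enabled on the Splunk Enterprise VM (Settings > Forwarding and receiving > Configure receiving > port 9997), and the UF points at the receiver in its own outputs.conf. The IP below is a placeholder.

```ini
# outputs.conf on the universal forwarder VM
# ($SPLUNK_HOME/etc/system/local/outputs.conf)
[tcpout]
defaultGroup = default-autolb-group

[tcpout:default-autolb-group]
# Placeholder - replace with the Splunk Enterprise VM's address
server = 192.168.1.10:9997
```

Also worth checking: Windows Firewall on both VMs allows TCP 9997, the UF actually has inputs configured, and splunkd.log on the UF shows whether the connection to the receiver succeeds.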
Hi, my query:

| tstats count where index=app-clietapp host=ahnbghjk OR host=ncsjnjsnjn sourcetype=app-clientapp source=/opt/splunk/var/clientapp/application.log by PREFIX(status:)
| rename status: as App_Status
| where isnotnull(App_Status)
| eval Sucess=if(App_Status="0" OR App_Status="", "Succ", null())
| eval Error=if(App_Status!="0", "Error", null())

Output:

App_Status | count  | Error | Sucess
0          | 767890 |       | Succ
6789       | 65     | Error |

But I want the output as shown below:

App_Status | Error | Sucess
6789       | 65    | 767890

Please let me know how to modify the query so that I can get the required output.
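One possible reading of the desired output is a total per category rather than per raw status value. A sketch along those lines (field names taken from the question; the categorization logic is an assumption) replaces the two eval flags with a single category field and sums the counts:

```spl
| tstats count where index=app-clietapp host=ahnbghjk OR host=ncsjnjsnjn sourcetype=app-clientapp source=/opt/splunk/var/clientapp/application.log by PREFIX(status:)
| rename status: as App_Status
| where isnotnull(App_Status)
| eval category=if(App_Status="0" OR App_Status="", "Sucess", "Error")
| stats sum(count) as count by category
```

If the two totals are wanted side by side in a single row, a final `| transpose header_field=category` can pivot the categories into columns.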
Hi Community, I am testing Splunk dashboard performance with Selenium IDE. I am able to get the desired results, but the one problem I am facing is logging out of Splunk to complete the testing. Is there a point of contact who can help me with this issue? Other ideas would also work, if they help me log out of Splunk from external applications. Regards, Pravin
Hi friends, I am upgrading my Splunk Enterprise from 7.1 to 8.1. After upgrading the indexer and search head, the peer server is not reflected under the indexer cluster group; there I can see only the master node. Do I need to follow any particular steps to upgrade such clusters?
Environment - a single Splunk Enterprise instance (v8.2.6) running on a RHEL 6.1 server, receiving data from multiple forwarders. Issue - the license volume has always shown as 30GB/day, for the past few years anyway. I found out today that the last license purchased (January 2022) was for 50GB/day, but the license page is still showing 30GB/day. How do I get the correct volume showing for our licensing? I would have thought it would be automatic, or part of the license install process.
Hi all, I have to extract sourcetype as a field in a dashboard. There are multiple sourcetypes, like: oracle:audit:json, oracle:audit:json11, oracle:audit:json12, oracle:audit:sql11, oracle:audit:sql12. I have written this regex:

rex mode=sed field=sourcetype "s/oracle:audit:(.*)\d\d/\1/g"

It works fine for the sourcetypes oracle:audit:json11, oracle:audit:json12, oracle:audit:sql11, and oracle:audit:sql12, but when the data comes in with oracle:audit:json, it gives no result in the dashboard. The main search query does give results. Macro definition:

definition = (sourcetype=oracle:audit:json OR sourcetype=oracle:audit:json11 OR sourcetype=oracle:audit:json12 OR sourcetype=oracle:audit:sysaud OR sourcetype=oracle:audit:sysaud11 OR sourcetype=oracle:audit:sysaud12 OR sourcetype=oracle:audit:sql11 OR sourcetype=oracle:audit:sql12)

I have also written macros where I have passed all the sourcetypes, but I get no result or a partial result in the dashboard for sourcetype oracle:audit:json.
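The sed expression requires two trailing digits, so oracle:audit:json (which has no digit suffix) never matches and the value is left unrewritten. One possible adjustment, as a sketch, is to make the digit suffix optional and anchor the match:

```spl
| rex mode=sed field=sourcetype "s/oracle:audit:(.+?)(\d\d)?$/\1/"
```

With this, json11 and sql12 still reduce to json and sql, while plain oracle:audit:json reduces to json as well. Test it against all eight sourcetypes in the macro before relying on it.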
I just came to the realization that this query shows "Missing" whether the host is missing in Splunk or exists in Splunk but not in the export:

index=_internal
| fields host
| dedup host
| eval host=lower(host)
| append [| inputlookup Export.csv | rename Hostname as host | eval host=lower(host)]
| stats count by host
| eval count=count-1
| eval Status=if(count=0,"Missing","OK")
| sort Status
| table host Status

What I would like is to change the query to show on which side it's missing.
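A sketch of one way to tell the two cases apart, keeping the same inputs as the question: tag each side with its own marker field before combining, then classify based on which marker is absent.

```spl
index=_internal
| stats count by host
| eval host=lower(host), in_splunk=1
| table host in_splunk
| append
    [| inputlookup Export.csv
     | rename Hostname as host
     | eval host=lower(host), in_export=1
     | table host in_export]
| stats max(in_splunk) as in_splunk, max(in_export) as in_export by host
| eval Status=case(isnull(in_export), "Missing from export",
                   isnull(in_splunk), "Missing from Splunk",
                   true(), "OK")
| sort Status
| table host Status
```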
Hi all, is there a limitation on combining transforms on a source in props.conf? Here is what I did, and somehow I don't get any result. Whenever I delete the TRANSFORMS-reroute entry, data is received and hostnames are changed. Somehow I can't get the source matching my regex rerouted to another index.

props.conf:

[source::tcp:514]
TRUNCATE = 64000
TRANSFORMS = newhost1
TRANSFORMS = newhost2
TRANSFORMS-reroute = set-index

transforms.conf:

[newhost1]
DEST_KEY = MetaData:Host
REGEX = mymatchinghost1rex
FORMAT = host::myhost1

[newhost2]
DEST_KEY = MetaData:Host
REGEX = mymatchinghost2rex
FORMAT = host::myhost2

[set-index]
DEST_KEY = _MetaData:Index
REGEX = .+mymatchingrex.+
FORMAT = myindex
WRITE_META = true

Thanks for your help, kind regards, Harald
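One thing worth noting as a possible cause: two bare TRANSFORMS = lines in the same stanza share the same key, so the second may simply override the first; named transform classes with comma-separated lists avoid that collision. A sketch of the props.conf with that change (stanza and transform names taken from the question; whether this explains the reroute failure is an assumption):

```ini
[source::tcp:514]
TRUNCATE = 64000
# One named class for both host rewrites, applied in order
TRANSFORMS-hosts = newhost1, newhost2
# Separate named class for the index reroute
TRANSFORMS-reroute = set-index
```

It is also worth checking that the set-index regex actually matches the raw event as it looks at parse time.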
How can I write a query like the following?

index=my_app
| eval userError="Error while fetching User"
| eval addressError="Did not find address of user"
| stats count(userError) as totalUserErrors, count(addressError) as totalAddressErrors

Expected output:

Error while fetching User    | 50
Did not find address of user | 30
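A sketch of one way to get those counts, assuming the two messages appear literally in the raw events (the field name errType is made up for illustration):

```spl
index=my_app
| eval errType=case(searchmatch("Error while fetching User"), "Error while fetching User",
                    searchmatch("Did not find address of user"), "Did not find address of user")
| where isnotnull(errType)
| stats count by errType
```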
Hi, I'm trying to create a set of playbooks to unit test other playbooks. Is it possible to run a playbook without providing an event ID? Thanks, Ansir
Hey, I have a Splunk Enterprise environment with a server cluster of 4 SHDs, 5 HFDs and 3 indexers. In addition, a number of alerts are configured on my search heads; the alerts use the 'collect' command, which indexes the events returned by the query into some index. For example: index=Example ... | collect index=production. It worked for some time, approximately 6 months, but now, when I try to search for events in the "production" index, I get 0 events. I searched for errors and bugs with the support of a Splunk specialist, but we didn't find a solution. One speculation we had was the 'stashparsing' queue, which is configured on the SHDs and used by the 'collect' command. We found logs in the '_internal' index about the queue's 'max_size=500KB' and 'current_size'. Over the last 30 days, the 'current_size' values were 0 99.9% of the time, and 494, 449, 320, 256 0.001% of the time. I tried increasing the queue's 'max_size': I created a file named 'server.conf' in $SPLUNK_HOME/etc/shcluster/apps/shd_base with this content:

[stashparsing]
maxsize=600MB

I distributed this to the SH cluster, but it did not seem to have any effect. Splunk version: 8.1.3. Linux version: Red Hat Enterprise Linux 7.8. This is an air-gapped environment, so I cannot attach any logs or data.
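For what it's worth, queue sizes in server.conf are normally set in a [queue=<name>] stanza with a camelCase maxSize key, and a deployer-pushed file usually needs to live under the app's local (or default) directory. A sketch of what that might look like (whether this addresses the collect problem at all is an assumption):

```ini
# $SPLUNK_HOME/etc/shcluster/apps/shd_base/local/server.conf on the deployer
[queue=stashparsing]
maxSize = 600MB
```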
Resourceinitializationerror: failed to validate logger args: Options "https://prd-p-88jca.splunkcloud.com:8088/services/collector/event/1.0": dial tcp 52.203.227.66:8088: connect: connection timed out : exit status 1

1. I followed this link for the task logs: https://www.splunk.com/en_us/blog/platform/docker-amazon-ecs-splunk-how-they-now-all-seamlessly-work-together.html
2. I also followed this one: https://bobcares.com/blog/use-splunk-log-driver-with-ecs-task-on-fargate/

Please give your answer to this post.
Hi all, could someone please help me with this query? I have 3 different sources from which I want to match fields.

Source A contains K_USER, ID
Source B contains RECID, USER_RESET
Source C contains USER, NAME

I have to do the query in 2 steps:
1. Join A and B using ID (RECID=ID) and get USER_RESET.
2. Join the result from step 1 with C. Match K_USER and USER_RESET to get the names from Source C.

To explain using an example:

Source A
K_USER | ID
ABN    | 1
XYZ    | 2

Source B
RECID | USER_RESET
1     | MNP
3     | IJK

Source C
USER | NAME
ABN  | John
XYZ  | Mary
MNP  | Philip
IJK  | Cathy

The final result should look like:

K_USER | ID | USER_RESET | NAME(K_USER) | NAME(USER_RESET)
ABN    | 1  | MNP        | John         | Philip

Can I achieve this without using join? Thanks in advance!
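A join-free sketch under one added assumption - that Source C is available as a lookup (called source_c here): merge A and B over a coalesced key with stats, then resolve both names via two lookups.

```spl
(source="sourceA" OR source="sourceB")
| eval join_key=coalesce(ID, RECID)
| stats values(K_USER) as K_USER, values(USER_RESET) as USER_RESET by join_key
| where isnotnull(K_USER) AND isnotnull(USER_RESET)
| rename join_key as ID
| lookup source_c USER as K_USER OUTPUTNEW NAME as NAME_K_USER
| lookup source_c USER as USER_RESET OUTPUTNEW NAME as NAME_USER_RESET
| table K_USER ID USER_RESET NAME_K_USER NAME_USER_RESET
```

If Source C is an indexed source rather than a lookup, the same stats-merge pattern can be extended by searching all three sources and coalescing over the user fields as well.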
Hi, I created a Splunk app with React, and when I symlinked it into Splunk's application directory, I received the expected message. But even after restarting my Splunk instance, the app does not appear in the list of apps on my Splunk instance. Why would this be, and how can I fix it? Many thanks.
Hiya, I am trying to use the ITSI REST API to update entities in my Splunk development environment. I am able to create entities and overwrite pre-existing entities which have the same `_key` value. But I want to update an entity without overwriting/deleting any data. Say I have an entity with 2 info fields, "cpu_cores = 4" and "memory = 32", and I just want to update the "cpu_cores" field to "cpu_cores = 8" and leave the "memory" field the same; whenever I execute the POST request, it overwrites the info fields and deletes the "memory" field. Below are the endpoint I am using and the JSON object to update the entity.

Endpoint:
https://<my_ip>:8089/servicesNS/nobody/itsi/itoa_interface/entity/bulk_update/?is_partial_data=1

JSON object:

[
  {
    "_key": "aa_entity_1",
    "title": "aa entity 1",
    "object_type": "entity",
    "description": "Just a test",
    "informational": {
      "fields": ["cpu_cores"],
      "values": ["8"]
    },
    "cpu_cores": ["8"]
  }
]

This JSON creates the entity fine, and my understanding from the documentation is that "is_partial_data=1" should mean it only updates data and does not remove any. I have looked around and tried different things with "is_partial_data=1": I've tried putting it into the JSON object as "is_partial_data": true, and I saw somewhere that the "/" shouldn't be present before "?is_partial_data=1" in the endpoint, but this didn't work either. Any help would be appreciated.

Additional info:
Splunk Enterprise: 9.0.3
ITSI version: 4.13.2
Using Postman for the POST request
Hi team, I am looking for help sending a report. I have a scheduled report which runs every hour. Can you please advise on the search query? If I create a new alert and the alert triggers, the scheduled report should be sent to the recipients. I am aware of the CSV/PDF attachment options; I am looking for a way to send the scheduled report's results as the notification when the alert triggers.
I have a configuration for TA-MS-AAD and we see that we have delays. I am trying to understand how _time is set.
Hello dear community, can you please advise me? My team is complaining that not all data arrives via the HEC token from Kubernetes. I don't see any errors in _internal for this index, but I noticed something interesting: in index="_introspection" there are gaps in the data. Could this be related, and how can I fix it?
Hi all, I have events like the ones below. How can I filter all events where, for example, the 6th character in C*E**M is M - and likewise, how can I filter all events where the 6th character is H? Please assist.

C*E**M****} JAWS Process to copy the legacy Virtu ORDERDETAILSs data from IMFT to network folder
C*E**M****} JAWS Process to copy the legacy Virtu Orders data from IMFT to network folder
C*E**M****} box that contains the processes to load Portware EOD files to APP_ETT database
C*E**M****} box that load the OMS legacy tables 1 11.111%
C*E3VL****} Box that contains the jobs to download and process the ITG Placement Inbound file
C*E**H****}ox that contains the processes t
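A sketch, assuming the pattern always sits at the start of the raw event: the regex command with ^.{5} skips the first five characters and then tests the 6th (the index name is a placeholder).

```spl
index=your_index
| regex _raw="^.{5}[MH]"
```

Use "^.{5}M" or "^.{5}H" to filter on just one of the two characters. Note that * in these events is a literal character, which is why a positional regex rather than a wildcard term is needed.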