All Topics


Hi, has anyone installed the "Add-on for Cloudflare data" app? I'm just after some documentation on how it is supposed to work and what the setup process is.
_raw = line 1 line 2 line 3 line 4 line 5 line 6. How do I define a new field "copyofraw" that contains just line 5 and line 6?
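A minimal sketch, assuming the six lines in _raw are separated by newlines (the base search is illustrative):

... | eval lines=split(_raw, "\n")
| eval copyofraw=mvjoin(mvindex(lines, -2, -1), "\n")

split() turns _raw into a multivalue field, mvindex() with negative offsets picks the last two values (line 5 and line 6), and mvjoin() stitches them back together with a newline.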
Hi, how do I extract the word "dev" from the file location below: source=/test1/folder1/scripts/monitor/log/env/dev/Error.log and add conditional logic in the Splunk query, e.g. if the word is "dev", change it to "development"; if it is "test", change it to "loadtest"? Thanks
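A minimal sketch, assuming the environment name always sits in the path segment after /env/ (the index and the field name env_word are illustrative):

index=your_index source="/test1/folder1/scripts/monitor/log/env/*/Error.log"
| rex field=source "/env/(?<env_word>[^/]+)/"
| eval env_word=case(env_word=="dev", "development", env_word=="test", "loadtest", true(), env_word)

rex captures the path segment after /env/, and case() maps the known values while leaving anything else unchanged.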
I am new to administering Splunk Enterprise Server. I'm guessing the answer is obvious to some, but I'm getting confused trying to figure out a solution from the documentation. We are using Splunk Enterprise Server v9.2.1 stand-alone on an isolated network. We primarily collect and report on multiple systems' audit logging. The server is set up, I can see ingested logs arriving, and I can create reports on the data. But I need one more thing: I must archive all the original data exactly as it is received on the TCP receiver and copy it to offline storage for safekeeping. I need to be able to re-ingest the raw data at some future date, but that seems pretty straightforward. How can I do this? Is there some way I can grab the data being received on my TCP port listener in raw form, or some magic I need to do with an indexer or forwarder-to-receiver chain? I'm sure I'm not the first person to need this. How do others accomplish this? Thank you!
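A minimal sketch of one common approach: keep indexing locally while teeing the raw stream to an archive host (the group name, host, and port are illustrative, and this assumes the archive host runs a plain TCP listener):

# outputs.conf on the Splunk instance that owns the TCP input
[tcpout]
indexAndForward = true

[tcpout:raw_archive]
server = archive-host.example.com:9998
sendCookedData = false

indexAndForward = true keeps the local indexing you already have, and sendCookedData = false forwards the stream uncooked, so the archive host receives the data close to the form in which it arrived.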
Can I change the default message in the alert trigger action "Send Email"? I have been looking around and can't find anything where I could change this. My goal is to create a template message so we can streamline our alert messages. Any help would be great!
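A minimal sketch of a template using the email action's tokens (the field names after result. are illustrative):

Alert $name$ triggered at $trigger_time$.
Host: $result.host$
Count: $result.count$
Results: $results_link$

Tokens such as $name$, $result.<fieldname>$, and $results_link$ are substituted when the email is sent, so one template can be reused across alerts; the same text can also be set per alert in savedsearches.conf via action.email.message.alert.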
In my raw data I have a portion that I would like to use in a report: "changes":{"description":{"before":"<some text or empty>","after":"<some text or empty>"}} I created this rex: rex summary= "changes":\{"description":\{"before":"<some text or empty>","after":"<some text or empty>"\}\}) but it doesn't work. Please advise.
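A minimal sketch of a working version, assuming the before/after values contain no embedded double quotes (the capture group names are illustrative):

... | rex field=_raw "\"changes\":\{\"description\":\{\"before\":\"(?<before>[^\"]*)\",\"after\":\"(?<after>[^\"]*)\"\}\}"

The placeholder text is replaced by named capture groups, and the quotes and braces are escaped. If the events are valid JSON, spath avoids the regex entirely:

... | spath path=changes.description.before output=before
| spath path=changes.description.after output=after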
In my index I don't see all the logs being forwarded by the Splunk UF. How can I monitor when events are dropped from the event queue on the Splunk UF? Can I monitor this on the Splunk deployment server?
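A minimal sketch of a health check, assuming the forwarder's _internal logs reach your indexers (the host filter is illustrative):

index=_internal source=*metrics.log* group=queue blocked=true host=your_uf_host
| stats count by host, name

metrics.log writes a blocked=true line whenever a queue on the forwarder fills up; repeated hits on queues such as tcpout are a sign the UF is backing up. Note this data comes from the forwarder itself, not from the deployment server, which only distributes configuration.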
We are in the midst of a migration from physical servers to virtual servers, and we wonder whether stopping Splunk is mandatory in order to perform the cold data migration, or whether there is a workaround so this can be done safely without stopping Splunk.
Hello, I want to use SOAR with Splunk Enterprise, with the two working together so that I do not have to buy Splunk ES. I want the process to be automatic: I send data from Splunk Enterprise to SOAR, and SOAR performs the action processes. How is this done? Note: I was using Splunk ES, but the process was cumbersome.
I'm working with a field named Match_Details.match.properties.user.  It contains domain\user information that I'm trying to split into domain and user.  I can't use EXTRACT in props.conf because of this restriction. EXTRACT-<class> = [<regex>|<regex> in <src_field>] NOTE: <src_field> has the following restrictions: * It can only contain alphanumeric characters and underscore (a-z, A-Z, 0-9, and _). Is this also true with REPORT in transforms.conf?  I can't find any documentation that tells me. TIA, Joe
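A minimal sketch of the transforms.conf route, assuming (and this is exactly the open question, so it needs testing) that SOURCE_KEY accepts the dotted field name; the stanza and capture names are illustrative:

# transforms.conf
[split_domain_user]
SOURCE_KEY = Match_Details.match.properties.user
REGEX = ^(?<user_domain>[^\\]+)\\(?<user_name>.+)$

# props.conf
[your_sourcetype]
REPORT-split_user = split_domain_user

If SOURCE_KEY turns out to have the same restriction, a search-time fallback is to rename and rex, e.g. | rename Match_Details.match.properties.user as mdpu | rex field=mdpu "^(?<user_domain>[^\\\\]+)\\\\(?<user_name>.+)$".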
I want to get an alert when there is a switch between events for the first time. Below is an example. index=abc sourcetype=xyz <warning> index=abc sourcetype=xyz <critical> I have these 2 queries, and I want an alert when there is a switch from <warning> to <critical>. Please help with the query.
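A minimal sketch, assuming a severity field can be derived from each event (the level field is illustrative; the index and sourcetype are from the question):

index=abc sourcetype=xyz ("warning" OR "critical")
| eval level=if(searchmatch("critical"), "critical", "warning")
| sort 0 _time
| streamstats current=f window=1 last(level) as prev_level
| where level="critical" AND prev_level="warning"

streamstats with current=f window=1 carries each event's predecessor forward, so the where clause keeps only the events where the state flipped from warning to critical; schedule this as an alert that triggers when results are returned.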
We have the "Reassign Knowledge Objects" option via the Splunk Cloud portal in the settings, but is it possible to do this via the API? We need to do it for all KOs owned by a specific user.
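A minimal sketch using the REST ACL endpoint, assuming REST access to the search head is available and using a saved search as the example object (host, credentials, and names are illustrative; other knowledge object types have analogous endpoints):

curl -k -u admin:changeme \
  "https://sh.example.com:8089/servicesNS/olduser/search/saved/searches/My%20Search/acl" \
  -d owner=newuser -d sharing=app

POSTing owner and sharing to the object's /acl endpoint reassigns ownership; to cover all of a user's objects you would first enumerate them (for example via the servicesNS/-/-/directory endpoint) and loop over the results.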
This is probably an entry-level question. I have raw data that looks something like this: {"id": 99999, "type": "HOST", "timestamp": "2024-04-29T10:41:39.820Z", "entity": {"ipAddress": "1.1.1.1"}, "dataName": "Testing"} If I search for type="HOST" or entity.ipAddress="1.1.1.1" I get this entry in the results, but if I search for dataName="Testing" or even dataName=*, I get nothing. What is different about this field?
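A minimal sketch to check whether the field is really in the JSON versus just not being auto-extracted (the index name is illustrative):

index=your_index
| spath input=_raw
| search dataName="Testing"

If spath finds it but the bare search does not, automatic search-time extraction isn't picking up that key, and the sourcetype's props.conf (for example KV_MODE = json) is worth checking.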
Hello, I have read many topics on Zulu time but am not able to solve this one. I have a date in this form: 2024-04-29T12:01:15.710Z and I just want it in the form YYYY-MM-DD HH:MM:SS. I tried this: eval latest_time = strptime(latest_time, "%Y-%m-%dT%H:%M:%S.%3N%Z") and the result is: 1714363262.904000. I really don't understand the problem! Thanks, Laurent
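A minimal sketch of the missing step: strptime() only parses the string into epoch seconds (the 1714363262.904000 you are seeing); strftime() is needed to format it back into a display string:

| eval latest_time=strftime(strptime(latest_time, "%Y-%m-%dT%H:%M:%S.%3NZ"), "%Y-%m-%d %H:%M:%S")

Note the literal Z at the end of the parse format; matching the trailing character literally is a safer assumption than %Z.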
In Splunk Dashboard Studio, I have a Status field with the values DROPPED and NOT DROPPED. If the value is DROPPED, the background color of the card should change to green; if the value is NOT DROPPED, it should be red. How can I achieve this in Dashboard Studio? Thanks
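A minimal sketch of dynamic coloring in the dashboard's source JSON, assuming a single value visualization (the context name statusColors is illustrative, and the exact selector chain varies by visualization type):

"options": {
    "backgroundColor": "> primary | seriesByName('Status') | matchValue(statusColors)"
},
"context": {
    "statusColors": [
        {"match": "DROPPED", "value": "#118832"},
        {"match": "NOT DROPPED", "value": "#D41F1F"}
    ]
}

The matchValue selector maps field values to colors, so DROPPED renders green and NOT DROPPED renders red; treat this as a starting point rather than a drop-in config.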
My source is Python. In the WSDL I have 20 items. When I execute the query in Splunk, all 20 items arrive in a single event, so I am unable to extract the fields and show their counts. How can I get all 20 items into individual events? Thanks
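A minimal sketch, assuming the 20 items arrive as repeating XML elements called <item> (the element and field names are illustrative):

... | rex field=_raw max_match=0 "(?s)(?<item><item>.*?</item>)"
| mvexpand item
| spath input=item

rex with max_match=0 collects every match into a multivalue field, mvexpand turns each value into its own event, and spath then extracts fields per item. If the structure is stable, splitting at index time with LINE_BREAKER in props.conf is the cleaner long-term fix.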
Hello, and thank you in advance for any insight. I am working on upgrading Splunk Enterprise from 8.2.3.2 to 9.1.4. I have been digging into the release notes and want to double-check that I am not missing anything significant. Has anyone had any issues upgrading from Splunk Enterprise 8.2.3.2 to 9.1.4?
Hello,

I'm having problems using roles. I use this search, which gives me results with the admin role:

[search index="idx_arv_ach_cas_traces" source="*orange_ach_cas_traces_ac_20*" nom_prenom_manager="*" nom_prenom_rdg="*" cuid="*" ttv="*" (LibEDO="*") (LibEDO="*MAROC ANNULATION FIBRE INTERNET" OR LibEDO="*MAROC CTC ET PROSPECT" OR LibEDO="*MAROC CTC HOME" OR LibEDO="*MAROC HORS-PROD" OR LibEDO="*MAROC N1 ACH" OR LibEDO="*MAROC N2 ACH GESTION" OR LibEDO="*MAROC N2 ACH RECLAMATION" OR LibEDO="*MAROC N2 ACH RECOUVREMENT" OR LibEDO="*MAROC RECOUVREMENT SOSH" OR LibEDO="*MAROC GESTION MS") ((lib_origine="Appel Reco" OR "Appel Sortant" OR "BO Récla Recouv" OR "Correspondance Entrante" OR "Correspondance Sortante" OR "Courrier Ent Fidé" OR "Etask") OR (lib_motif="Contact Flash" OR "Contact non tracé" OR "Traiter une demande en N2" OR "Verbatim urgent") OR (lib_resultat="Client Pro" OR "Contact Flash" OR "Contact non tracé")) AND (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01")
| eventstats sum(total) as "Nbre_de_tracages" by lib_origine
| top "Nbre_de_tracages" lib_origine
| sort - "Nbre_de_tracages"
| head 5
| streamstats count as row_number
| search row_number=1
| return lib_origine]
nom_prenom_manager="*" nom_prenom_rdg="*" cuid="*" ttv="*" (LibEDO="*") (LibEDO="*MAROC ANNULATION FIBRE INTERNET" OR LibEDO="*MAROC CTC ET PROSPECT" OR LibEDO="*MAROC CTC HOME" OR LibEDO="*MAROC HORS-PROD" OR LibEDO="*MAROC N1 ACH" OR LibEDO="*MAROC N2 ACH GESTION" OR LibEDO="*MAROC N2 ACH RECLAMATION" OR LibEDO="*MAROC N2 ACH RECOUVREMENT" OR LibEDO="*MAROC RECOUVREMENT SOSH" OR LibEDO="*MAROC GESTION MS") ((lib_origine="Appel Reco" OR "Appel Sortant" OR "BO Récla Recouv" OR "Correspondance Entrante" OR "Correspondance Sortante" OR "Courrier Ent Fidé" OR "Etask") OR (lib_motif="Contact Flash" OR "Contact non tracé" OR "Traiter une demande en N2" OR "Verbatim urgent") OR (lib_resultat="Client Pro" OR "Contact Flash" OR "Contact non tracé")) AND (cuid!="AUTOCPAD" AND cuid!="BTORCPAD" AND cuid!="COCOA01" AND cuid!="CRISTORC" AND cuid!="ECARE" AND cuid!="FACADE" AND cuid!="IODA" AND cuid!="MEFIN" AND cuid!="ND" AND cuid!="ORCIP" AND cuid!="ORDRAGEN" AND cuid!="PORTAIL USSD" AND cuid!="RECOU01" AND cuid!="SGZF0000" AND cuid!="SVI" AND cuid!="USAGER PURGE" AND cuid!="VAL01")
| stats sum(total) as "nb_tracages" by cuid lib_origine
| sort -nb_tracages
| head 5

When I use another role, the first part of the search works, but not the second: the search on nom_prenom_manager="*", etc. doesn't give any results, whereas with the admin role it does. I can't modify the query because I don't have rights to it, so I have to work with the roles. I'd like to point out that the nom_prenom_manager field is obtained via an automatic lookup, but there is no problem with specific rights for the admin role. I've tried everything but can't find a solution; any ideas are welcome.
Hello there, I'm a newbie to Splunk and need your help to forward syslog logs coming into Splunk to a third-party Linux server. I can clearly see on my SH instance that there are logs with sourcetype=syslog, but when I use a heavy forwarder to forward these logs, I receive nothing. I configured the heavy forwarder's receiving to listen on 9997, so my sources send their logs to the HF on 9997. The HF also transmits everything it receives to the Splunk SH on 9997, and I'm also trying to transmit syslog to the third-party server. When I configure the outputs with the following configuration for syslog, I receive nothing on my server:

[syslog]
defaultGroup=syslogGroup

[syslog:syslogGroup]

And when I just use:

[tcpout:custom_group]
server = ip:port
sendCookedData = false

I receive all kinds of data and none of it is tagged with a sourcetype. Although I can see syslog events among them, they are not tagged properly. Please help me out. Thanks in advance.
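A minimal sketch of selective syslog routing, assuming the third-party server listens for plain syslog on UDP 514 (host, port, and stanza names are illustrative):

# outputs.conf on the heavy forwarder
[syslog:syslogGroup]
server = third.party.host:514
type = udp

# transforms.conf
[route_syslog_out]
REGEX = .
DEST_KEY = _SYSLOG_ROUTING
FORMAT = syslogGroup

# props.conf
[syslog]
TRANSFORMS-routesyslog = route_syslog_out

The original [syslog:syslogGroup] stanza had no server setting, which would explain why nothing arrived. Routing via _SYSLOG_ROUTING sends only the events whose props stanza matches (here the syslog sourcetype), whereas the plain [tcpout:custom_group] with sendCookedData = false ships everything as an untagged raw stream, which matches what you observed.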
Hello, Splunk community! I have created a correlation search with the following search string:

index="kali2_over_syslog" ((PWD=/etc AND cmd=*shadow) OR (PWD=* cmd=*/etc/shadow)) OR ((PWD=/etc AND cmd=*passwd) OR (PWD=* cmd=*/etc/passwd))
| eval time=strftime(_time, "%D %H:%M")
| stats values(host) as "host" values(time) as "access time" values(executer) as "user" count by cmd
| where 'count'>0

When I use it in the Search & Reporting app and execute "sudo cat /etc/shadow" on the monitored Linux machine, it catches the event. The rest of the settings of this correlation search are the same as in my other correlation search, which I used as a template. That other correlation search works well: notable events are generated and the email notification is sent. The only difference is that I am not using any data model in my search, because I have a small test lab with only one machine on which I want to monitor this activity. Could it be that I must use CIM-validated data models in my search for the correlation search to work correctly and generate notable events? I am new to Splunk, so I am sorry if my question is a bit unclear; let me know if you need additional information.