All Posts

I have modified the Azure API links and replaced all of them with the Azure China API URLs, but I can only collect part of the data, not all of it.
apiStartTime and apiEndTime are not set when info=completed but are set when info=granted - try something like this: index=_audit action=search provenance=* info=granted host IN (...) (apiStartTime="ZERO_TIME" OR apiEndTime="ZERO_TIME") | table user, apiStartTime, apiEndTime, search_et, search_lt, search | convert ctime(search_*)
Hi DavidLi, I didn't realise that after a year you would still reply. Thank you so much!
Hi Team, I am using a free trial version of Splunk and forwarding logs from a Palo Alto firewall to Splunk. Sometimes I am getting logs, sometimes not; it seems to be a timezone issue. My Palo Alto firewall is in the US/Pacific time zone. How can I check the Splunk timezone, and how can I configure it to be the same on both sides? #splunktimeZone
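If it helps, here is a minimal sketch of the usual fix - the sourcetype name pan:traffic is an assumption, so adjust it to whatever sourcetype your firewall logs actually arrive under. In props.conf on the indexer (or heavy forwarder, if one parses the data first):

[pan:traffic]
# Assumption: the firewall writes its timestamps in US/Pacific local time
TZ = US/Pacific

Splunk stores events in UTC internally and renders them in each user's timezone preference (Settings > your account > Time zone), so TZ only needs to describe the timezone the raw timestamps were written in.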
Hi @Taruchit, first of all, don't use the search command when you can put all the parameters in the main search. Then, I'd avoid running a search over All Time because you could get too many events; define a useful timerange instead. index=_audit action=search provenance=* info=completed host IN (...) (apiStartTime="ZERO_TIME" OR apiEndTime="ZERO_TIME") | table user, apiStartTime, apiEndTime, search_et, search_lt, search | convert ctime(search_*) About the meaning of the results: they depend on the parameters you defined; probably with apiEndTime="ZERO_TIME" you don't have the apiStartTime field. Analyze your search and modify it to get the best results for you. Ciao. Giuseppe
| eventstats values(eval(if(status="Issue","Bad",null()))) as Health
Hi @nkavouris, you can use a subsearch to filter results in the main search, passing fields with the same names and taking care to pass only the fields used for filtering - in your case keystone_time, serial_number, message, and after, but not model, which isn't used in the main search. The problem is the message field, because you need to use it as part of the search; in this case you have to rename it to "query": search index="june_analytics_logs_prod" [ search index="june_analytics_logs_prod" message="*new_state: Diagnostic, old_state: Home*" | spath serial output=serial_number | spath message output=message | spath model_number output=model | eval keystone_time=strftime(_time,"%Y-%m-%d %H:%M:%S.%Q"), before=_time-10, after=_time+10, latest=strftime(latest,"%Y-%m-%d %H:%M:%S.%Q") | rename message AS query | fields keystone_time serial_number query after ] Renaming message AS query makes the subsearch result run in full-text search mode. I haven't used it with other fields, only by itself, but it should work. Ciao. Giuseppe
Hi, this error is normal - the script catches errors, and all values are good. The thing is, when I ingest these logs and set TIME_PREFIX, I get two values for timestamp for just one log, not the others, even though they have the same JSON format.
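A minimal sketch of the kind of props.conf stanza that usually pins this down - the sourcetype name and the "timestamp" field name are assumptions to adapt to your JSON:

[my_json_sourcetype]
# Anchor timestamp extraction right after the "timestamp" key
TIME_PREFIX = \"timestamp\":\s*\"
# Assumption: ISO-8601 values like 2024-05-01T12:34:56.789+0000 - match your real format
TIME_FORMAT = %Y-%m-%dT%H:%M:%S.%3N%z
# Stop scanning after ~40 characters so a second date later in the event is never picked up
MAX_TIMESTAMP_LOOKAHEAD = 40

MAX_TIMESTAMP_LOOKAHEAD is often the piece that prevents the "two timestamps" symptom, since it keeps the extractor from wandering past the intended field.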
Yeah, same same but different. Yesterday I applied this and it started working too: s/(\\")/"/g on the data. But now I do not see it in the sourcetype advanced options; if I add it again, the log quality will be ruined again, so I'm not sure how the TA messed it up.
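For what it's worth, a sed expression like that normally persists in props.conf as a SEDCMD rather than in the UI's advanced options - a sketch, with the sourcetype name as a placeholder:

[your_sourcetype]
# Index-time rewrite: turn escaped \" into plain " (the same substitution as above)
SEDCMD-unescape_quotes = s/\\"/"/g

If the TA ships its own props.conf for the same sourcetype, you can check which copy wins with: splunk btool props list your_sourcetype --debug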
Hi @Dyrock, as you can see in https://www.splunk.com/en_us/resources/videos/getting-data-in-with-forwarders.html and read at https://docs.splunk.com/Documentation/Splunk/9.3.0/Data/Forwarddata and https://docs.splunk.com/Documentation/SplunkCloud/9.2.2403/Forwarding/Aboutforwardingandreceivingdata you have to: configure the Indexer to receive logs from UFs (I suppose that 997 is a mistyping, because the default port is 9997); configure outputs.conf on your UF to send data to the indexers on the same port; configure the inputs on the UF. At this point you will see your logs on the Indexer. Ciao. Giuseppe
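A minimal sketch of the two UF-side files, with placeholder values to adapt (the indexer address and monitored path are assumptions):

outputs.conf on the UF:
[tcpout]
defaultGroup = primary_indexers
[tcpout:primary_indexers]
server = <indexer_ip>:9997

inputs.conf on the UF:
[monitor:///var/log/myapp]
index = main
sourcetype = myapp:logs

After editing, restart the UF and confirm arrival from the indexer with a search like: index=_internal host=<uf_hostname>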
Hi @y71855872, are you indexing pcap logs from Wireshark, as described in the instructions at https://splunkbase.splunk.com/app/2748 ? Then, if you use a custom index, you have to put it in the default search path or add it to all the dashboards, as described in the instructions. Ciao. Giuseppe
Are there any benefits to moving "UNPSEC" back to null()? I usually just gave it "N/A" for strings and 0 for numerics. None whatsoever. This is purely for people who want non-existent values to show blank.
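A tiny SPL sketch of the two behaviours, using a hypothetical status/Health pair: | eval Health=if(status="Issue","Bad",null()) leaves a blank cell in tables when status is anything else, while appending | fillnull value="N/A" Health makes the same rows display "N/A" instead.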
Yes, your understanding is correct.
I am confused as to how to get this app to work. Can anyone provide me with an instruction sheet telling me what needs to be done? I have downloaded and installed the PCAP Analyzer app but can't seem to get it to analyze. Can anyone help me?
What happens if the amount of data exceeds the daily limit in Splunk Cloud (the "Total ingest limit of your ingest-based subscription")? Does data ingestion stop, or does Splunk contact you to discuss adding a license while ingestion continues?
Hello, this is my first experience with Splunk as I am setting up a lab. In VirtualBox I have: VM1, acting as server: Ubuntu Desktop 24.04 LTS - IP: 192.168.0.33 - installed Splunk Enterprise - added port 997 under Configure Receiving - added an index named Sysmonlog. VM2, acting as client: Windows 10 - IP: 192.168.0.34 - installed Sysmon - installed Splunk Forwarder - set the deployment server IP 192.168.0.34, port 8089 - set indexer 192.168.0.33, port 9997. Ping is successful from both VMs. When I am about to add the forwarder in my indexer, nothing shows up. How should I troubleshoot this to be able to add the forwarder?
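A few checks that usually narrow this down (standard commands, sketched under the assumption of default install paths). On the Windows UF, from the SplunkForwarder\bin directory:

splunk list forward-server        - shows active vs configured-but-inactive forward hosts
splunk btool outputs list --debug - shows exactly which outputs.conf settings win

On the indexer, search for the UF phoning home: index=_internal host=<client_hostname> component=TcpOutputProc. Also note the post mentions both 997 and 9997 - the receiving port enabled on the indexer must match the 9997 the forwarder sends to.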
Hi @nmohammed and @goelshruti119, please see the following reply for instructions on how to troubleshoot: https://community.splunk.com/t5/Installation/Install-issue-on-Server-2016/m-p/540173/highlight/true#... Cheers, - Jo.
Our vulnerability scan is reporting a critical-severity finding affecting several components of Splunk Enterprise, related to an OpenSSL (1.1.1.x) version that has become EOL/EOS. My research seems to indicate that this version of OpenSSL may not yet be EOS for Splunk due to the purchase of an extended support contract; however, I have been unable to find documentation that supports this. Please help provide this information or suggest how this finding can be addressed. Path: /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/libcrypto.so Installed version: 1.1.1k Security End of Life: September 11, 2023 Time since Security End of Life (Est.): >= 6 months Thank you.
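In the meantime, one way to confirm exactly which OpenSSL build is embedded in that library (a sketch, assuming the standard strings and grep utilities are on the host):

strings /opt/splunk/etc/apps/Splunk_SA_Scientific_Python_linux_x86_64/bin/linux_x86_64/lib/libcrypto.so | grep -i "OpenSSL 1"

This prints the version banner compiled into libcrypto, which can be compared against the scanner's claim.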
Situation: Search Head Cluster 9.2.2, 5 nodes running Enterprise Security 7.3.2. I'm in the process of adding 5 new nodes to the cluster. Part of my localization involves creating /opt/splunk/etc/system/local/inputs.conf with the following contents (the reason I do this is to make sure the host field for forwarded internal logs doesn't contain the FQDN-like hostname in server.conf):

[default]
host = <name of this host>

When I get to the step where I run:

splunk add shcluster-member -current_member_uri https://current_member_name:8089

it works, but /opt/splunk/etc/system/local/inputs.conf is replicated from current_member_name. And if I run something like:

splunk set default-hostname <name of this host>

it modifies inputs.conf on EVERY node of the cluster. Digging into this, I believe it is happening because of the Domain Add-on DA-ESS-ThreatIntelligence, which contains a server.conf file in its default directory (why this would be, I've no idea). Contents of /opt/splunk/etc/shcluster/apps/DA-ESS-ThreatIntelligence/default/server.conf on our cluster deployer - which is now delivered to all cluster members:

[shclustering]
conf_replication_include.inputs = true

It seems to me that it's this stanza that is causing the issue. Am I on the right track? And why would DA-ESS-ThreatIntelligence be delivered with this particular config? Thank you.
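If this stanza does turn out to be the culprit, one possible workaround - a sketch only, not a confirmed fix for ES, so worth validating with Splunk support first - is to override the add-on's default in a higher-precedence location on each member, e.g. /opt/splunk/etc/system/local/server.conf:

[shclustering]
# Override DA-ESS-ThreatIntelligence's default/server.conf; system/local outranks an app's default directory
conf_replication_include.inputs = false

To see which file each setting is actually coming from: splunk btool server list shclustering --debug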
Actually, I did what you said. I asked this question to the community to make sure I was doing it right, in case I was missing something. SOAR is installed on a CentOS 8.5 operating system. I couldn't install OpenVPN on this OS, so I rented another virtual machine and installed OpenVPN on it. The VPN machine and SOAR were on different networks again, so I peered them over Azure. The CentOS 8.5 machine and the OpenVPN machine were on the same network. When I connect to the VPN from my computer, I can ping the CentOS private IP address from my computer and get a response - there is no problem here. But Splunk SOAR still refuses to connect.
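Since ICMP works but the application connection fails, it may help to test the actual SOAR port instead of ping - a sketch, assuming the SOAR web interface listens on the default HTTPS port 443 (adjust if yours differs). From the VPN-connected computer:

curl -vk https://<soar_private_ip>/   - exercises TCP plus TLS on 443
nc -zv <soar_private_ip> 443          - tests the port alone

On the SOAR host itself:

ss -tlnp | grep 443                   - confirms something is listening
sudo firewall-cmd --list-all          - shows whether the host firewall allows it

Ping succeeding while TCP fails usually points to a host firewall, an Azure NSG rule on the peering, or the service binding to a different interface.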