All Posts


Removing FQDN from field values Hi all, can anyone help me frame the SPL query for the requirement below? I have a field named Host which contains multiple values. Some of them include an FQDN, in various formats, at the end of the hostname. For example: Host (value1.corp.abc.com, value2.abc.com, value3.corp.abc, value4.xyz.com, value5.klm.corp, value6.internal, value7.compute.internal, etc.) From this, I need to get the Host values as (value1, value2, value3, value4, value5, value6, value7) in my result by removing all types of FQDN. Can you please help? Thanks in advance.
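A minimal sketch of one approach, assuming Host is a multivalue field and the short name is always the segment before the first dot (mvmap requires Splunk 8.0 or later; the first two lines just fabricate sample data):

| makeresults
| eval Host=split("value1.corp.abc.com value2.abc.com value3.corp.abc value4.xyz.com", " ")
| eval Host=mvmap(Host, mvindex(split(Host, "."), 0))

If Host is single-valued, something like | rex field=Host mode=sed "s/\..*$//" should achieve the same by stripping everything from the first dot onward.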
Hi Team, I am connecting Anypoint Studio with Splunk using HEC. The logs are being forwarded, but some of them are missing in Splunk even though they are present in the Anypoint Studio logs. How do I troubleshoot this issue?   Thanks, Karthi
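One place to start is the receiving instance's internal logs; a sketch, assuming you can search _internal (HttpInputDataHandler is the splunkd component that logs HEC parsing and channel errors):

index=_internal sourcetype=splunkd component=HttpInputDataHandler log_level=ERROR
| stats count by host

Errors there (malformed JSON, bad timestamps, exceeded limits) would explain events that arrive at HEC but never get indexed.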
Having the same 403 Forbidden error; it redirects to https://www.splunk.com/login/saml2/sso/okta after entering username and password to log in. It works fine from a different computer located in a different network region, but on my local PC the error persists after rebooting, clearing the browser cache, and using different browsers. It was working fine two days earlier on this PC.
Finding something that is not there is not Splunk's strong suit.  See this blog entry for a good write-up on it. https://www.duanewaddle.com/proving-a-negative/
The answer depends on what you intend to test, but you should be able to treat the data as frozen and copy the buckets from all 3 indexers into the thawed directory to a standalone indexer.  See https://docs.splunk.com/Documentation/Splunk/9.2.1/Indexer/Restorearchiveddata for how to thaw data.
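For reference, a rough sketch of the thaw steps on the standalone instance, assuming an index named logs_test and an illustrative bucket name (real bucket directories follow the db_<newest>_<oldest>_<id> pattern):

# copy a bucket exported from one of the indexers into thaweddb
cp -r db_1563858157_1563773416_42 $SPLUNK_HOME/var/lib/splunk/logs_test/thaweddb/
# rebuild the bucket so it becomes searchable, then restart Splunk
$SPLUNK_HOME/bin/splunk rebuild $SPLUNK_HOME/var/lib/splunk/logs_test/thaweddb/db_1563858157_1563773416_42
$SPLUNK_HOME/bin/splunk restart

Note that in a cluster each bucket can exist on multiple peers (db_ and rb_ copies), so copy only one copy of each unique bucket ID to avoid duplicate events.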
My issue is solved. I manually copied the files from colddb on the recovered node to the colddb location on the production nodes (enable maintenance mode, stop Splunk on the receiving node, copy the files, make sure ownership is correct, start Splunk, disable maintenance mode). The recovered node is currently still in the cluster, because removing it would fill up the remaining indexers a bit too much, which would lead to data loss again. Once we have added some additional RHEL9 peer nodes, we will remove the recovered node and life will be good again. Thanks for the tips and clarifying info.
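For anyone following along, a sketch of that sequence under the same assumptions (maintenance-mode commands run on the cluster manager; the index name and source path are illustrative):

# on the cluster manager
$SPLUNK_HOME/bin/splunk enable maintenance-mode
# on the receiving production node
$SPLUNK_HOME/bin/splunk stop
cp -r /recovered/colddb/* $SPLUNK_HOME/var/lib/splunk/my_index/colddb/
chown -R splunk:splunk $SPLUNK_HOME/var/lib/splunk/my_index/colddb
$SPLUNK_HOME/bin/splunk start
# back on the cluster manager
$SPLUNK_HOME/bin/splunk disable maintenance-mode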
Please find below the CIM compliance I have done for the Bit9 Security Platform. This covers about 90%; if anyone can find more, please post it on this thread.

props.conf

[bit9]
EVAL-dest_nt_domain = mvindex(split(HostName, "\\"),0)
EVAL-date = strftime(_time,"%Y-%m-%d %H:%M:%S")
EVAL-vendor_product = "bit9 carbon black"
EVAL-action = case(like(EventSubType,"%change%"),"modified",like(EventSubType,"%delet%"),"deleted",like(EventSubType,"%,modif%"),"modified",like(EventSubType,"%create%"),"created",like(EventSubType,"%fail%"),"failure",like(EventSubType,"%succe%"),"success",like(EventSubType,"%rest%"),"restarted",like(EventSubType,"%shutdown%"),"shutdown",like(EventSubType,"%start%"),"started",like(EventSubType,"%reset%"),"modified",like(EventSubType,"%login%"),"success",like(EventSubType,"%logout%"),"logoff",like(EventSubType,"%attach%"),"created",like(EventSubType,"%detach%"),"deleted",like(EventSubType,"%upgrade%"),"upgraded",like(EventSubType,"%uninstall%"),"deleted",like(EventSubType,"%nstall%"),"created",like(EventSubType,"%finish%"),"success",like(EventSubType,"%close%"),"success",like(EventSubType,"%set%"),"modified",like(EventSubType,"%allow%"),"allowed",like(EventSubType,"%block%"),"blocked",like(EventSubType,"%download%"),"created",like(EventSubType,"%detect%"),"allowed",like(EventSubType,"%found%"),"allowed",like(EventSubType,"%discover%"),"created",like(EventSubType,"%error%"),"error",like(EventSubType,"%writ%"),"allowed",like(EventSubType,"%execut%"),"success",like(EventSubType,"%lost%"),"failure",like(EventSubType,"%add%"),"created",like(EventSubType,"%approve%"),"success",like(EventSubType,"%New%"),"allowed",like(EventSubType,"%update%"),"updated",like(EventSubType,"%upload%"),"created",like(EventSubType,"%clone%"),"created",like(EventSubType,"%regis%"),"created",like(EventSubType,"%New unapproved%"),"allowed",OpType=="0","created",OpType=="1","deleted",OpType=="9","created",OpType=="6","started",OpType=="5","modified",OpType=="2","modified",OpType=="12","deleted",OpType=="7","success",OpType=="11","modified",1==1,"success")
FIELDALIAS-filepathname = PathName as file_path
FIELDALIAS-severity = priority as severity
FIELDALIAS-signature = EventSubType as signature
FIELDALIAS-signature_id = EventSubTypeId as signature_id
FIELDALIAS-src = src_ip as src
FIELDALIAS-src_user = UserName as src_user
FIELDALIAS-user = UserName as user
FIELDALIAS-category = EventType as Category
FIELDALIAS-description = OpDescription as description
FIELDALIAS-originalfilname = FileName as original_file_name
FIELDALIAS-process = ProcessPath as process
FIELDALIAS-process_path = ProcessPath as process_path
FIELDALIAS-process_hash = ProcessHash as process_hash
FIELDALIAS-process_name = ProcessFileName as process_name
FIELDALIAS-process_id = ProcessKey as process_id
FIELDALIAS-command = CommandLine as command
FIELDALIAS-object = Policy as object
FIELDALIAS-object_id = PolicyId as object_id
FIELDALIAS-object_category = Platform as object_category
FIELDALIAS-result = EventSubType as result
FIELDALIAS-dvc = dvc_ip as dvc

tags.conf

[eventtype=bit9_malware]
malware = enabled
attack = enabled

[eventtype=bit9_event]
endpoint = enabled
filesystem = enabled

[eventtype=bit9_filesOnComputers]
endpoint = enabled
filesystem = enabled

[eventtype=bit9_event_change]
endpoint = enabled
change = enabled

[eventtype=bit9_event_authentication]
authentication = enabled
success = enabled

eventtypes.conf

[bit9_fileCatalog]
search = index=$index_name$ sourcetype=bit9 source=*Metadata*

[bit9_filesOnComputers]
search = index=$index_name$ sourcetype=bit9 source=*NetTrace*

[bit9_event]
search = index=$index_name$ sourcetype=bit9 source=*Event*

[bit9_malware]
search = index=$index_name$ sourcetype=bit9 source=*Event* (EventSubType="Potential risk file detected" OR EventSubType="Malicious file detected")

[bit9_event_change]
search = index=$index_name$ sourcetype=bit9 source=*Event* (action!=allowed AND action!=blocked) (EventSubType!="*login*" AND EventSubType!="*logout*")

[bit9_event_authentication]
search = index=$index_name$ sourcetype=bit9 source=*Event* (EventSubType="*login*" OR EventSubType="*logout*")
I am trying to write a Splunk query. I have asset inventory data with hostname and IP address (multivalue); one hostname can have multiple IP addresses. I also have indexed data in Splunk with a field called Hostname (a mix of hostnames and IP addresses of some random assets). Now I need to compare the asset inventory data with the indexed data, and the output should be the hostnames and IP addresses that are not present in the indexed data.

Sample data:

index=asset_inventory | table hostname IPaddress

hostname  IPaddress
abc       0.0.0.0
abc       2.2.2.2
abc       3.3.3.3
def       1.1.1.1
xyz       4.5.6.7

Indexed data:

index=indexed_data | stats count by Reporting_Host

Reporting_Host
3.3.3.3
def

Expected output:

Host_not_present
xyz

Can someone help me with a Splunk query to get the desired output?
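A sketch of one way to do it, assuming the field names from the samples and that a host counts as present if either its hostname or any of its IP addresses shows up as Reporting_Host:

index=asset_inventory
| eval key=mvappend(hostname, IPaddress)
| mvexpand key
| eval seen=0
| append [ search index=indexed_data | stats count by Reporting_Host | eval key=Reporting_Host, seen=1 | fields key seen ]
| stats max(seen) as seen, values(hostname) as hostname by key
| mvexpand hostname
| stats max(seen) as seen by hostname
| where seen=0
| rename hostname as Host_not_present

The first stats marks every hostname/IP key that also appears in the indexed data; the second collapses back to one row per hostname, so only hosts with no matching key at all survive the where clause.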
Did the scheduled search run successfully? Can you rerun the search with the relevant timeframe and simply add the missing events to the summary index? Note, I did a BSides talk on summary indexing idempotency which you might find useful - it is available on YouTube here
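For the backfill itself, a minimal sketch, assuming illustrative index and search names (the stats line stands in for your original summarizing search; set earliest/latest to the missing window):

index=your_index earliest=-24h@h latest=@h
| stats count by user, action
| collect index=your_summary_index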
Assuming msgType extracts values such as SAP and TMD, try something like this:

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)" | spath | search SAP OR TMD | timechart count by msgType
Hi @frobinson_splun , I have the same requirement: to provide a link to one of the tabs in a dashboard within an alert email. The tab name is dynamic. I noticed that the mentioned documentation links have expired. Could you please provide me with the latest documentation links? Thanks in advance.
Hi, I use collect to create a summary of VPN login and logout events. This worked fine, but last week 24 hours of logout events went missing, while the summary of login events was created. I checked the search without the collect command and it gives the correct output. I tried it with a test index and it worked too. But when I run the search for the missing timeframe, nothing appears in the destination index. Do you have any advice on what else I could check?   Thanks
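One thing worth ruling out is whether the scheduled search was skipped for that window; a sketch, assuming you replace the hypothetical saved search name with yours:

index=_internal sourcetype=scheduler savedsearch_name="vpn_logout_summary"
| stats count by status

A status of skipped (for example, due to concurrency limits) would explain a gap even though the search works when run manually.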
Hi, I am trying to build a visualization such that when I select multiple categories I see multiple lines in the chart, and when a single value is selected I get a single line, but I am unable to create the chart when passing multiple values in categories. Please help.

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)" | spath | search SAP OR TMD | timechart count by SAP or TMD

I am expecting a result like the below, but the query above gives an error:

| rex field=message "MessageTemplate\\\":\\\"(?<msgType>[^\\\"]+)" | spath | search SAP | timechart count

This one comes out fine, as I have timechart by count. Thanks in advance, Ashima
Hi, I have a cluster (3 indexers) with data and I want to copy one index, "logs_Test", to a single-instance install for testing. Can I copy it from the back end on all 3 and bring them together? I feel this won't work. Can I export it from the search head to a new index and then move that? Any ideas would be great. Thanks in advance, Robbie
Hi @ajitshukla61116  My requirement is similar to yours: I need to include a link to a dashboard within an alert email. Have you made any progress on this use case? If so, your insights would be really helpful. Thanks in advance.  
It does. I understand at a high level what it's doing but will need to walk through the specifics, although it does get me where I needed to be. Here is what I ended up with:

index=anIndex sourcetype=aSourceType aString1 earliest=-481m@m latest=-1m@m
| eval age=now() - _time
| eval age_ranges=split("1,6,11,31,61,91,121,241",",")
| foreach 0 1 2 3 4 5 6 7
    [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
        zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
        aString1Count=mvappend(aString1Count, zone) ]
| stats count by aString1Count
| transpose 8 header_field=aString1Count
| rename 0 AS "string1Window1", 1 AS "string1Window2", 2 AS "string1Window3", 3 AS "string1Window4", 4 AS "string1Window5", 5 AS "string1Window6", 6 AS "string1Window7", 7 AS "string1Window8"
| appendcols
    [ search index=anIndex sourcetype=aSourceType aString2 earliest=-481m@m latest=-1m@m
        | eval age=now() - _time
        | eval age_ranges=split("1,6,11,31,61,91,121,241",",")
        | foreach 0 1 2 3 4 5 6 7
            [ eval r=tonumber(mvindex(age_ranges, <<FIELD>>))*60,
                zone=if(age < 14400 + r AND age > r, <<FIELD>>, null()),
                aString2Count=mvappend(aString2Count, zone) ]
        | stats count by aString2Count
        | transpose 8 header_field=aString2Count
        | rename 0 AS "string2Window1", 1 AS "string2Window2", 2 AS "string2Window3", 3 AS "string2Window4", 4 AS "string2Window5", 5 AS "string2Window6", 6 AS "string2Window7", 7 AS "string2Window8" ]
| table string1Window* string2Window*

Sample output:

string1Window1 string1Window2 string1Window3 ...
44 42 40 ...
We are using a clustered indexer environment and want to use NAS as our cold storage. I mapped the NAS to a local folder on Linux so it is accessible by Splunk, and I can see the mapped folders on the local Linux device. But when I change the configuration on the cluster master to use this NAS for cold data and push the configuration, Splunk sometimes hangs and stops, and even if I restart it, it will not work. Has anyone tried NAS as cold storage? If you could share your fstab and indexes.conf, that would be great!
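For comparison, a minimal sketch of an NFS-backed cold volume, assuming an export at nas.example.com:/export/splunk_cold (hostname, mount options, sizes, and the logs_test index name are all illustrative, not tested recommendations):

# /etc/fstab - hard mount so I/O blocks rather than corrupting buckets on a network blip
nas.example.com:/export/splunk_cold  /mnt/splunk_cold  nfs  rw,hard,noatime  0 0

# indexes.conf (pushed from the cluster master)
[volume:cold]
path = /mnt/splunk_cold
maxVolumeDataSizeMB = 500000

[logs_test]
homePath = $SPLUNK_HOME/var/lib/splunk/logs_test/db
coldPath = volume:cold/logs_test/colddb
thawedPath = $SPLUNK_HOME/var/lib/splunk/logs_test/thaweddb

If splunkd hangs when the configuration is pushed, it is worth checking that the mount point is writable by the splunk user on every peer and that the NAS responds quickly, since a stalled hard NFS mount will block splunkd.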
Hi Team, full logs are not loading in Splunk for a Windows server. Any suggestions on what to check?
Hello Splunk team, I was troubleshooting a query with the anomalydetection command (https://docs.splunk.com/Documentation/SplunkCloud/9.1.2312/SearchReference/Anomalydetection), and one thing came to my attention. While using action=filter I'm still seeing events with probable_cause_freq=1.0000 and log_event_prob=0.000. Should that actually happen? Is log_event_prob=0.000 a threshold? It's not an issue for me to filter them out; I just wanted to double-check whether that is expected behaviour, as I couldn't find it in the documentation. Thanks!
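In the meantime, one workaround is to annotate instead of filter and apply your own cutoff; a sketch (the -5 cutoff is an arbitrary illustration, not a documented threshold - log_event_prob is a log probability, so more negative means more anomalous):

index=your_index
| anomalydetection action=annotate
| where log_event_prob < -5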