All Topics


index=cim_modactions source=/opt/splunk/var/log/splunk/incident_ticket_creation_modalert.log host=sh* search_name=* source=* sourcetype=modular_alerts:incident_ticket_creation user=* action_mode=* action_status=* search_name=kafka*
    [| rest /servicesNS/-/-/saved/searches
     | search title=kafka*
     | rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
     | eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
     | eval identifierDate=now()
     | convert ctime(identifierDate) AS identifierDate
     | table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo, actions
     | fillnull value=""
     | sort -lastUpdated actions]
| table user search_name action_status date_month date_year _time
I'm struggling to get data in from Infoblox using the Splunk Add-on for Infoblox. I looked at the documentation and realized it doesn't support current versions. I'm using Infoblox NIOS 9.0.3, while the Splunk documentation says the add-on supports Infoblox NIOS 8.4.4, 8.5.2, and 8.6.2. Specifically, it's not parsing correctly, and everything goes into sourcetype=infoblox:port. Are there any more current ways to get data in from Infoblox? Can I get Splunk support to help me, since it's a Splunk-supported add-on?
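In case it helps while waiting on support: everything landing in infoblox:port usually means the add-on's sourcetype-routing transforms no longer match the newer NIOS message format. A minimal local-override sketch, assuming your NIOS 9 DNS messages still carry the named daemon tag (the stanza name and regex are hypothetical placeholders; adjust them to your actual events):

    # local/transforms.conf
    [force_infoblox_dns]
    REGEX = named\[\d+\]
    FORMAT = sourcetype::infoblox:dns
    DEST_KEY = MetaData:Sourcetype

    # local/props.conf
    [infoblox:port]
    TRANSFORMS-route_dns = force_infoblox_dns

The same pattern can be repeated per message type (DHCP, audit, and so on) until the add-on officially supports NIOS 9.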
How do I determine the server setting for my on-premises agent config when trying to send data via HTTP from a Windows server to my new Splunk Cloud instance?
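If the agent sends over HTTP Event Collector, the server value is the stack's HEC endpoint rather than the web URL. A minimal connectivity sketch, assuming an AWS-hosted stack named mystack and an existing HEC token (GCP-hosted stacks use https://http-inputs.mystack.splunkcloud.com instead, and trial stacks may use port 8088 rather than 443):

    curl "https://http-inputs-mystack.splunkcloud.com:443/services/collector/event" \
      -H "Authorization: Splunk <your-hec-token>" \
      -d '{"event": "connectivity test from the Windows server"}'

If that returns {"text":"Success","code":0}, the same host and port belong in the agent's server setting.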
I want to use the 2nd search as a subsearch, only bringing back the actions. How can I do this?

SEARCH:

| rest /servicesNS/-/-/saved/searches
| search title=kafka*
| rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
| eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
| eval identifierDate=now()
| convert ctime(identifierDate) AS identifierDate
| table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo
| fillnull value=""
| sort -lastUpdated

SUBSEARCH:

| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
| search disabled=0
| eval length=len(md5(title)), search_title=if(match(title,"[-\\s_]"),("RMD5" . substr(md5(title),(length - 15))),title), user='eai:acl.owner', "eai:acl.owner"=if(match(user,"[-\\s_]"),rtrim('eai:acl.owner',"="),user), app_name='eai:acl.app', "eai:acl.app"=if(match(app_name,"[-\\s_]"),rtrim('eai:acl.app',"="),app_name), commands=split(search,"|"), ol_cmd=mvindex(commands,mvfind(commands,"outputlookup")), si_cmd=mvindex(commands,mvfind(commands,"collect"))
| rex field=ol_cmd "outputlookup (?<ol_tgt_filename>.+)"
| rex field=si_cmd "index\\s?=\\s?(?<si_tgt_index>[-_\\w]+)"
| eval si_tgt_index=coalesce(si_tgt_index,'action.summary_index._name'), ol_tgt_filename=coalesce(ol_tgt_filename,'action.lookup.filename')
| rex field=description mode=sed "s/^\\s+//g"
| eval description_short=if(isnotnull(trim(description," ")),substr(description,0,127),""), description_short=if((len(description_short) > 126),(description_short . "..."),description_short), is_alert=if((((alert_comparator != "") AND (alert_threshold != "")) AND (alert_type != "always")),1,0), has_report_action=if((actions != ""),1,0)
| fields + app_name, description_short, user, splunk_server, title, search_title, "eai:acl.sharing", "eai:acl.owner", is_scheduled, cron_schedule, max_concurrent, dispatchAs, "dispatch.earliest_time", "dispatch.latest_time", actions, search, si_tgt_index, ol_tgt_filename, is_alert, has_report_action
| eval object_type=case((has_report_action == 1),"report_action",(is_alert == 1),"alert",true(),"savedsearch")
| where is_alert==1
| eval splunk_default_app = if((app_name=="splunk_archiver" OR app_name=="splunk_monitoring_console" OR app_name="splunk_instrumentation"),1,0)
| where splunk_default_app=0
| fields - splunk_server, splunk_default_app
| search title=*kafka*
| table actions title user
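A common pattern for "only hand one field back from the subsearch" is to end it with fields and format, so it expands into a filter on the outer search. A minimal sketch, under the assumption that the outer events carry an actions field to match on (rename inside the subsearch if the outer field has a different name):

    index=cim_modactions sourcetype=modular_alerts:incident_ticket_creation search_name=kafka*
        [| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
         | search disabled=0 title=*kafka*
         | fields actions
         | format]
    | table user search_name action_status _time

If instead the goal is to display the actions alongside the outer results rather than filter on them, append or join on title/search_name would be the tool, since subsearches in square brackets only produce search filters.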
The Monitoring Console uses metrics data provided by servers with a Splunk forwarder installed. The metrics data appears to use the hostname found in the /etc/hostname file on Linux servers. However, our forwarders are set up with a hostname specified in ../etc/system/local/inputs.conf, where a "cname" for the host is specified. This results in a difference between the "host" used in searches and the "hostname" shown in the Monitoring Console dashboards and alerts. Is there a best practice for unifying the host and hostname in the Monitoring Console?
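One approach, sketched under the assumption that you want the CNAME to win everywhere: the Monitoring Console generally keys on the instance's serverName (which defaults to the system hostname), while searches use the host from inputs.conf, so aligning the two makes them report the same value. The hostname below is a placeholder:

    # $SPLUNK_HOME/etc/system/local/inputs.conf
    [default]
    host = myhost-cname.example.com

    # $SPLUNK_HOME/etc/system/local/server.conf
    [general]
    serverName = myhost-cname.example.com

The forwarder needs a restart afterward, and already-indexed metrics will still carry the old name.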
We want to be able to easily search IOCs in SOAR. The Indicators tab is inconsistent: it shows IOCs for some records but not others, and when comparing them it isn't clear why one appears and another doesn't. Could there be a way to ensure everything shows up, using tagging? If so, how? Thanks!
Trying to get a permanent field extraction for a field. I tried the field extractions tab and gave the regex there, but it fails; running the same regex in search works, and I don't know why. Below is the event:

host: juniper-uat.systems.fed
Connection: keep-alive
sec-ch-ua-platform: ""Windows""
X-Requested-With: XMLHttpRequest

I need to extract the host value as 'fqdn' permanently. I gave this regex - Host:\s(?<fqdn>(.+))\n - in field extraction, as attached below. But it is extracting the whole event starting from the fqdn value; it is not extracting correctly. Please help me in this regard.
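A minimal props.conf sketch for a permanent extraction that stops the capture at the first whitespace instead of relying on a greedy .+ (the sourcetype name is a placeholder for yours):

    # props.conf
    [your_sourcetype]
    EXTRACT-fqdn = (?im)^host:\s+(?<fqdn>\S+)

The (?i) lets it match the lowercase host: header shown in the event even though the posted regex uses Host:, and \S+ cannot run past the end of the value the way (.+) can when the event's line breaks are not where the regex expects them.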
Morning everyone,

I've been having a rough go trying to get some usable web usage reports out of Splunk for my Palo Alto traffic. Specifically, I'm trying to do what I think is a semi-simple thing. My test is going to a website like Amazon and then navigating around the site looking at different products (robotic vacuums in my case). Then I look at the traffic in Splunk, which reports back only, say, "2 Hits". Palo reports the following: [screenshot]

I set my policy in Palo to log at session start.

My search in Splunk is this:

index="pan_firewall" log_subtype="url" chris.myers dest_zone="L3-Untrust" url="www.*"
user!="*solarwinds*" user!="*service*" user!=unknown
http_category!="work-related" http_category!="health-and-medicine" http_category!="government" http_category!="web-advertisements"
url!="ad.*" url!="www.abuseipdb.com*" url!="www.userbenchmark.com*" url!="www.xboxab.com*" url!="www.microsoft.com*" url!="www.content.shi.com*" url!="www.shi.com*" url!="www.workday.com*" url!="www.patientfirst.visualstudio.com*" url!="www.malwarebytes.com*" url!="www.globalknowledge.com*" url!="www.jetbrains.com*" url!="www.dnnsoftware.com*" url!="www.juniper.net*" url!="www.intel.com*" url!="www.cpug.org*" url!="www.vmware.com*" url!="www.csirt.org*" url!="ads.*" url!="www.vwc.state.va.us*" url!="www.atlantichealth.org*" url!="www.uhcprovider.com*" url!="www.checkpoint.com*" url!=*rumiview.com* url!="*bing.com*" url!="www.facebook.com/plugins/*" url!="www.codechef.com*" url!="www.splunk.com*" url!="www.aetna.com*" url!="www.radmd.com*" url!="www.humanamilitary.com*" url!="www.myamerigroup.com*" url!="www.providerportal.com*" url!="www.vcuhealth.org*" url!="www.workcomp.virginia.gov*" url!="www.cisco.com*" url!="www.va.gov*" url!="www.wcc.state.md.us*" url!=www.kraken.com* url!="www.medicaid.gov*" url!="www.scc.virginia.gov*" url!="www.dli.pa.gov*" url!="www.maryland.gov*" url!="www.hscrc.state.md.us*" url!="www.msftncsi.com*" url!="*.msftconnecttest.com*" url!="*.msftconnect.com*" url!="*.manageengine.com*" url!="*.ibm.com*" url!="*.paloaltonetworks.com*" url!="www.nowinstock.net*" url!="*.centurylink.com*" url!="*.static-cisco.com*" url!="*.arin.net*" url!="www.facebook.com/connect/*" url!="www.facebook.com/third_party/urlgen_redirector/*" url!="*windstreamonline.com*" url!=*google*
dest_hostname!=*fe2.update.microsoft.com dest_hostname!=crl.microsoft.com url!=*windowsupdate* url!="www.telecommandsvc*" url!="www.redditstatic*" url!="www.redditmedia*" url!="www.gravatar.*" dest_hostname!=*icloud.com dest_hostname!=*gstatic.com
url!=*.js url!=*.jpg url!=*.png url!=*.gif url!=*.svg url!=*.jpeg url!=*.css
| where isnull(referrer)
| top limit=25 dest_hostname
| rename dest_hostname as URL
| table URL, count

And my result is this: [screenshot]

What am I missing, or what am I not understanding? I would expect every page I visit, for every vacuum I look at, to be 1 hit. But my understanding has to be wrong: I went and viewed over 15 individual vacuums, so different product URLs, and Palo doesn't even seem to log them.
I am expecting to see something like this listed:

https://www.amazon.com/Kokaidia-Navigation-Suction-Robotic-Cleaner/dp/B0DFT3B813/?_encoding=UTF8&pd_rd_w=7SKOA&content-id=amzn1.sym.7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_p=7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_r=KFZVRKBXWB2AZG55ENKY&pd_rd_wg=CjTy7&pd_rd_r=25c15f6d-78b3-46dc-8484-285dfeef98e2&ref_=pd_hp_d_atf_dealz_cs

I also looked at our Palo Alto application that is installed in Splunk, but it is just throwing a JavaScript error and providing no data output, so I'll have to revisit that later. I'm not even trying to pull that into the conversation, unless someone says that is how I should be looking at it and my search queries are the problem. I know someone has experience with this and welcome any and all input. I am banging my head against the wall and open to anything.
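Two things worth checking here, offered as a sketch rather than a definitive diagnosis: top dest_hostname collapses every Amazon page into a single row per hostname, and without SSL decryption the firewall can only log the hostname it sees in the TLS handshake (SNI), not full product paths. To see whether full URLs were ever logged at all:

    index="pan_firewall" log_subtype="url" url="www.amazon.com*"
    | stats count BY url
    | sort - count

If every event's url is just www.amazon.com/, the paths were never captured at the firewall, and decryption (or a different data source, such as a proxy log) would be needed to report per-product-page hits.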
To date, we have separate dashboards for separate application teams. Now the ask is to create a common dashboard for all applications. Is it really possible? We have restricted users via index so they can't see other applications' data. We don't have any app_name in the logs either; only the indexes segregate the logs, and the sourcetype is the same. The log format for all applications is similar. How can I achieve this? Should I extract app_name from the host we have, keep it in a drop-down, and involve the index in a drop-down as well (a sketch of that approach is below)? Is it really possible? Please help me with your action plan for this.
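A minimal Simple XML sketch of the drop-down idea, assuming index names map one-to-one to application teams (the index and app names are placeholders). Because roles already restrict indexes, each team would still only get results for indexes it can search, even on a shared dashboard:

    <form>
      <fieldset submitButton="false">
        <input type="dropdown" token="app_index" searchWhenChanged="true">
          <label>Application</label>
          <choice value="app_a_index">App A</choice>
          <choice value="app_b_index">App B</choice>
          <default>app_a_index</default>
        </input>
      </fieldset>
      <row>
        <panel>
          <table>
            <search>
              <query>index=$app_index$ | stats count BY host</query>
              <earliest>-24h@h</earliest>
              <latest>now</latest>
            </search>
          </table>
        </panel>
      </row>
    </form>

An app_name derived from host could populate the same drop-down dynamically via a populating search instead of static choices.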
We are a small Managed Service Provider (MSP) currently testing Splunk with a deployment on a Windows 2019 server using a trial version. After adding a remote device and integrating the Fortinet FortiGate App for Splunk, the system functioned well initially. However, the next day, we noticed that the system encountered 5 violations in one night. Subsequently, when accessing the dashboard, we were greeted with the following message: "Error in 'rtlitsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store." Is there a way to resolve this issue without performing a full reinstall? Additionally, is there a way to set a limit on the amount of data being indexed to avoid triggering the violation? I have come across references to routing logs to the "nullQueue" and would appreciate feedback from the community on this approach or any other recommended solutions. Thank you in advance for your help!
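On the nullQueue question, a minimal sketch of the standard pattern, assuming the noisiest events are FortiGate traffic logs you can match with a regex (the sourcetype and pattern are placeholders; this must live on the instance that first parses the data):

    # props.conf
    [fgt_traffic]
    TRANSFORMS-drop_noise = drop_fortigate_noise

    # transforms.conf
    [drop_fortigate_noise]
    REGEX = type=traffic
    DEST_KEY = queue
    FORMAT = nullQueue

Anything matching the regex is discarded before indexing, so it no longer counts against the daily license quota.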
Hello, I really need help. Why is the 'Create Server' button in the Splunk App for SOAR disabled? After installing this app on the Splunk search head cluster (9.3.1) through the deployer, I still don't have a role named 'phantom'. Please, I would really appreciate a response.
I want to extract only HIGCommercialAuto and MLM-RS-H from the logs below, into a field called product_name.

HIGCommercialAuto higawsaccountid: 463251740121 higawslogstream: app-5091-prod-1-ue1-EctAPI/EctAPI/17eea8553cb8434bb4c126047817da16
MLM-RS-H higawsaccountid: 463251740121 higawslogstream: app-5091-prod-1-ue1-EctAPI/EctAPI/17eea8553cb8434bb4c126047817da16
MLM-R3-N higawsaccountid: 463251740121 higawslogstream: app-5091-prod-1-ue1-EctAPI/EctAPI/17eea8553cb8434bb4c126047817da16
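A minimal rex sketch, assuming the product name is always the first token before higawsaccountid and that only those two values are wanted:

    ... | rex "^(?<product_name>\S+)\s+higawsaccountid:"
        | search product_name IN ("HIGCommercialAuto", "MLM-RS-H")
        | table product_name

The rex captures whatever leads the event; the search then keeps only the two listed values, so MLM-R3-N is dropped.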
I want to add an endpoint to the webhook allow list. I checked the documentation for that. However, I cannot find "Webhook allow list" under Settings > Server settings. Can someone please help me with this: where do I find this option, and is it available in the trial version? Is there any other alternative for this?

Splunk Cloud Version: 9.3.2408.107
Build: b802f6467976
Webhooks Input
Custom Alert Webhook
Failed to start KV Store process. See mongod.log and splunkd.log for details. @Splunk
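A first-pass check worth running before digging deeper, as a sketch assuming a Linux install at /opt/splunk (adjust the path for your platform):

    /opt/splunk/bin/splunk show kvstore-status
    tail -n 100 /opt/splunk/var/log/splunk/mongod.log
    tail -n 100 /opt/splunk/var/log/splunk/splunkd.log

Common causes surfaced there include an expired server certificate, a stale mongod.lock file left by an unclean shutdown, and permission problems on $SPLUNK_HOME/var/lib/splunk/kvstore.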
Hi, some of my events don't have a timestamp and have been written as multiple lines in the log. I want to merge those lines into the previous line item. Below is an example; I want to merge the lines that don't have a timestamp into the preceding timestamped line:

2024-05-24 14:11:51.7212|INFO|Services.Voice.VoiceManager|Wake word detected. hey_mentor
2024-05-24 14:11:51.7212|INFO|Services.Sound.SoundManager|Playing Sound.VoiceStart_TEMP
2024-05-24 14:11:53.9271|INFO|Services.Voice.VoiceManager|Received command
Spoken text: hey mentor turn off L E D
Intent Name: ChangeImageTransformOnOff
Intent Value: turn { OnOff off } { ImageTransformsOnOff L E D }
Slot 1: OnOff=off
Slot 2: ImageTransformsOnOff=L E D
2024-05-24 14:11:53.9271|INFO|NotificationService|Notify: [Illumination set to Off]
2024-05-24 14:11:59.5010|INFO|Services.Voice.VoiceManager|Wake word detected. hey_mentor
2024-05-24 14:11:59.5010|INFO|Services.Sound.SoundManager|Playing Sound.VoiceStart_TEMP
2024-05-24 14:12:01.8935|INFO|Services.Voice.VoiceManager|Received command
Spoken text: hey mentor turn on L E D
Intent Name: ChangeImageTransformOnOff
Intent Value: turn { OnOff on } { ImageTransformsOnOff L E D }
Slot 1: OnOff=on
Slot 2: ImageTransformsOnOff=L E D
2024-05-24 14:12:01.8935|INFO|NotificationService|Notify: [Illumination set to On]
2024-05-24 14:12:01.8935|INFO|Services.Sound.SoundManager|Playing Sound.VoiceStop_TEMP
2024-05-24 14:12:06.7081|INFO|Controls.Live.LiveModel|IsReady=True, Pause <<

Could anyone please help me write a query to achieve this.
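This is usually fixed at index time rather than with a query: break events only where a new timestamp starts, so untimestamped lines stay attached to the line above them. A minimal props.conf sketch, assuming the sourcetype is voice_app (a placeholder), applied on the instance that first parses the data:

    # props.conf
    [voice_app]
    SHOULD_LINEMERGE = false
    LINE_BREAKER = ([\r\n]+)(?=\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})
    TIME_PREFIX = ^
    TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
    MAX_TIMESTAMP_LOOKAHEAD = 25

For data that is already indexed, a search-time fallback along the same lines is | transaction startswith=<timestamp regex>, though reindexing with the props above is the cleaner fix.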
I want to use an autoencoder model in Splunk for anomaly detection. I have already built my own model, and I did not use a scaler during the process. However, I still encountered the following error: [screenshot]

I want to check the fields returned by my function in the search bar. What syntax can I use to verify this? This is my Python code:

import numpy as np
import pandas as pd
from tensorflow.keras.models import Model  # needed only for the optional hidden-layer output

def apply(model, df, param):
    """Apply the model for anomaly detection (no scaling)."""
    X = df[param['feature_variables']].copy()
    # 1. Type conversion: map booleans to 1/0
    X = X.replace({True: 1, False: 0})
    # 2. Handle special characters / missing values: coerce non-numeric values to NaN, then zero-fill
    X = X.apply(pd.to_numeric, errors='coerce')
    X = X.fillna(0)
    # 3. Unify dtypes
    X = X.astype('float32').values
    # Reconstruct the inputs and compute the per-row reconstruction error
    X_reconstructed = model.predict(X)
    reconstruction_errors = np.mean(np.square(X - X_reconstructed), axis=1)
    # Anomaly threshold from a configurable percentile
    threshold_percentile = param.get('options', {}).get('params', {}).get('threshold_percentile', 95)
    threshold = np.percentile(reconstruction_errors, threshold_percentile)
    # Build the result frame
    df_result = df.copy()
    df_result['reconstruction_error'] = reconstruction_errors
    # Per-group thresholds split on is_work
    filtered_errors_1 = df_result.loc[df_result['is_work'] == 1, 'reconstruction_error']
    filtered_errors_0 = df_result.loc[df_result['is_work'] == 0, 'reconstruction_error']
    threshold_1 = np.percentile(filtered_errors_1, threshold_percentile) if not filtered_errors_1.empty else np.nan
    threshold_0 = np.percentile(filtered_errors_0, threshold_percentile) if not filtered_errors_0.empty else np.nan
    df_result['threshold'] = np.where(df_result['is_work'] == 1, threshold_1, threshold_0)
    # Note: is_anomaly is flagged against the global threshold, not the per-group ones above
    df_result['is_anomaly'] = (reconstruction_errors > threshold).astype(int)
    # Optional hidden-layer features
    if param.get('options', {}).get('params', {}).get('return_hidden', False):
        intermediate_model = Model(inputs=model.inputs, outputs=model.layers[1].output)
        hidden = intermediate_model.predict(X)
        hidden_df = pd.DataFrame(hidden, columns=[f"hidden_{i}" for i in range(hidden.shape[1])])
        df_result = pd.concat([df_result, hidden_df], axis=1)
    return df_result

I used apply to call this model, but I want to see the threshold field returned in df_result.
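Every column of the DataFrame returned by apply becomes a field on the search results, so the threshold field can be inspected directly in SPL. A minimal sketch, assuming the fitted model was saved under the name my_autoencoder (substitute your model name and base search):

    index=my_data
    | apply my_autoencoder
    | table reconstruction_error threshold is_anomaly

If a field doesn't show up there, | fieldsummary lists everything the search actually returned.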
Hello, Splunkers! A couple of days ago I was trying to test the Splunk UI Toolkit, but I couldn't connect to Splunk Cloud. I also couldn't find any documentation related to Cloud, so do you know how to make it work? I'd really appreciate your help and reply. Maximiliano Lopes
I'm trying to resize text in a pie chart or column chart in Splunk Dashboard Studio, but I'm not finding a way to do it. Does anyone know if there's a way to resize text?
Hello Splunk Support, We are using Splunk Cloud in our company and we need the contact details of our Splunk Cloud Account Manager to update our internal records. Could you please provide us with the name, email, and contact information of our assigned account manager?
We have our environment in Google Cloud Platform, with a search head cluster of 3 SHs. Earlier, the issue was that notable index data was being stored locally on each search head. To fix this, we created the notable index on the indexer cluster and forwarded the SH data to the indexer cluster using the "indexer discovery" method. The problem now is that the configuration (props.conf & transforms.conf) that used to redirect data into the local notable index on each SH is not taking effect to route the data into the notable index created on the indexer cluster. However, internal index data is now forwarding to the indexer cluster.
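For reference, a minimal outputs.conf sketch of the indexer discovery setup on the search heads (the manager URI, group names, and key are placeholders):

    # outputs.conf (deployed to each search head via the deployer)
    [indexer_discovery:prod_cluster]
    master_uri = https://cluster-manager.example.com:8089
    pass4SymmKey = <discovery_key>

    [tcpout:prod_indexers]
    indexerDiscovery = prod_cluster

    [tcpout]
    defaultGroup = prod_indexers
    indexAndForward = false

One thing to verify for the props/transforms problem: index-time transforms run where events are first parsed. Data generated on a search head is parsed on the search head before forwarding, so the props.conf/transforms.conf that rewrite _MetaData:Index must stay on the SHs (pushed via the deployer) and match the events' sourcetype; placing them only on the indexers will not affect data that arrives already cooked.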