All Posts

This advice continues to be helpful, thank you!
I want to use the 2nd search as a subsearch, only bringing back the actions. How can I do this?

SEARCH

| rest /servicesNS/-/-/saved/searches
| search title=kafka*
| rename dispatch.earliest_time AS "frequency", title AS "title", eai:acl.app AS "app", next_scheduled_time AS "nextRunTime", search AS "query", updated AS "lastUpdated", action.email.to AS "emailTo", action.email.cc AS "emailCC", action.email.subject AS "emailSubject", alert.severity AS "SEV"
| eval severity=case(SEV == "5", "Critical-5", SEV == "4", "High-4", SEV == "3", "Warning-3", SEV == "2", "Low-2", SEV == "1", "Info-1")
| eval identifierDate=now()
| convert ctime(identifierDate) AS identifierDate
| table identifierDate title lastUpdated, nextRunTime, emailTo, query, severity, emailTo
| fillnull value=""
| sort -lastUpdated

SUBSEARCH

| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
| search disabled=0
| eval length=len(md5(title)), search_title=if(match(title,"[-\s_]"),("RMD5" . substr(md5(title),(length - 15))),title), user='eai:acl.owner', "eai:acl.owner"=if(match(user,"[-\s_]"),rtrim('eai:acl.owner',"="),user), app_name='eai:acl.app', "eai:acl.app"=if(match(app_name,"[-\s_]"),rtrim('eai:acl.app',"="),app_name), commands=split(search,"|"), ol_cmd=mvindex(commands,mvfind(commands,"outputlookup")), si_cmd=mvindex(commands,mvfind(commands,"collect"))
| rex field=ol_cmd "outputlookup (?<ol_tgt_filename>.+)"
| rex field=si_cmd "index\s?=\s?(?<si_tgt_index>[-_\w]+)"
| eval si_tgt_index=coalesce(si_tgt_index,'action.summary_index._name'), ol_tgt_filename=coalesce(ol_tgt_filename,'action.lookup.filename')
| rex field=description mode=sed "s/^\s+//g"
| eval description_short=if(isnotnull(trim(description," ")),substr(description,0,127),""), description_short=if((len(description_short) > 126),(description_short . "..."),description_short), is_alert=if((((alert_comparator != "") AND (alert_threshold != "")) AND (alert_type != "always")),1,0), has_report_action=if((actions != ""),1,0)
| fields + app_name, description_short, user, splunk_server, title, search_title, "eai:acl.sharing", "eai:acl.owner", is_scheduled, cron_schedule, max_concurrent, dispatchAs, "dispatch.earliest_time", "dispatch.latest_time", actions, search, si_tgt_index, ol_tgt_filename, is_alert, has_report_action
| eval object_type=case((has_report_action == 1),"report_action",(is_alert == 1),"alert",true(),"savedsearch")
| where is_alert==1
| eval splunk_default_app = if((app_name=="splunk_archiver" OR app_name=="splunk_monitoring_console" OR app_name=="splunk_instrumentation"),1,0)
| where splunk_default_app=0
| fields - splunk_server, splunk_default_app
| search title=*kafka*
| table actions title user
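One possible approach is a join rather than a classic subsearch filter (a sketch only, assuming title is a usable join key between the two REST calls and that you just want the actions column appended):

| rest /servicesNS/-/-/saved/searches
| search title=kafka*
    ``` ... rest of the first search as above ... ```
| join type=left title
    [| rest "/servicesNS/-/-/saved/searches" timeout=300 splunk_server=*
     | search disabled=0 title=*kafka*
     | table title actions ]
| table identifierDate title lastUpdated nextRunTime emailTo query severity actions

Keeping the title=*kafka* and disabled=0 filters inside the bracketed search matters, since subsearches are subject to result-count and runtime limits.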
The Monitoring Console uses metrics data provided by servers with a Splunk forwarder installed. The metrics data appears to use the hostname found on Linux servers in the /etc/hostname file. However, our forwarders are set up with a hostname specified in ../etc/system/local/inputs.conf, where a "cname" for the host is specified. This results in a difference between the "host" used in searches and the "hostname" shown in the Monitoring Console dashboards and alerts. Is there a best practice for unifying the host and hostname in the Monitoring Console?
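For reference, the kind of override described above looks like this (a sketch; the cname value is illustrative):

# $SPLUNK_HOME/etc/system/local/inputs.conf on the forwarder
[default]
# Overrides the OS hostname (/etc/hostname) for the "host" field on all inputs
host = app01-cname.example.com

The Monitoring Console, by contrast, generally identifies instances by the serverName/GUID from server.conf rather than the event host field, which may be why the two values diverge when only inputs.conf is overridden.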
Hi @ITWhisperer , Thanks for sharing. I am okay with users, but we have a few roles, like engineer, that should have access to all indexes. What can I do in this case? Can I give index names in a drop-down and pass that token to the base search, like index=$index_name$? Will that work? BTW, is it good practice to have a common dashboard with multiple indexes (maybe 200+)? It is okay for the users who load the dashboard because they are restricted to specific indexes, but what about the engineer role and admin? Every time the dashboard runs, all indexes will be searched by default (*). Will there be performance issues in Splunk? How can I overcome this?
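A sketch of the drop-down idea in Simple XML (the token name and populating search are illustrative, not from this thread):

<input type="dropdown" token="index_name" searchWhenChanged="true">
  <label>Index</label>
  <search>
    <query>| eventcount summarize=false index=* | dedup index | table index</query>
  </search>
  <fieldForLabel>index</fieldForLabel>
  <fieldForValue>index</fieldForValue>
  <default>*</default>
</input>

The base search can then use index=$index_name$, so privileged roles like engineer pick one index at a time instead of scanning all 200+ on every load.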
Hi @Gregory.Burkhead, Thank you for asking your question on Community. Since it's been a few days with no reply, did you happen to find any new information or a solution you can share? If you're still looking for help, you can contact AppDynamics Support: How do I open a case with AppDynamics Support? 
Hi @ckarthikin , sorry, but the issue is at the ingestion level: you have to assign a correctly defined sourcetype (standard or custom) to your data; then you can search your data correctly parsed and aggregated. So the questions are the same as before: which technology? Which add-on is used for parsing? If none, you have to create a correct sourcetype and apply it to your data source. Ciao. Giuseppe
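As a sketch of what assigning a sourcetype at ingestion looks like (the monitor path, sourcetype name, and timestamp format are placeholders, not from this thread):

# inputs.conf on the forwarder
[monitor:///var/log/myapp/app.log]
sourcetype = myapp:custom
index = main

# props.conf on the indexer (or heavy forwarder)
[myapp:custom]
SHOULD_LINEMERGE = false
LINE_BREAKER = ([\r\n]+)
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S

With a well-defined sourcetype in place, field extractions and add-on knowledge objects can be bound to it at search time.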
Hi @Alberto.Astolfi, Thank you so much for coming back and sharing the solution. 
Am I the only one with this issue? OK, we made the decision to wipe the installations clean and installed 9.3.2. After configuring deploymentclient.conf for several instances, the UI is now working fine.
Index access is controlled by role, so if your separate groups of users are assigned different roles, with each role only able to access the indexes associated with its app, then they can all use a common search which lists all the indexes, and each user will only see the data from the indexes they have access to.
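A sketch of that role setup in authorize.conf (role and index names are illustrative):

# authorize.conf
[role_app1_users]
srchIndexesAllowed = app1_index

[role_app2_users]
srchIndexesAllowed = app2_index

[role_engineer]
srchIndexesAllowed = *

A shared dashboard search such as index=app1_index OR index=app2_index then returns only the subset each role is permitted to read.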
Assuming your events follow the pattern shown, you could try something like this

| rex "[^\|]+\|(?<time>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}\.\d{4})\|"
| streamstats count(time) as eventnumber
| stats values(time) as time list(_raw) as event by eventnumber
| eval _time=strptime(time,"%F %T.%4N")

This will also reset the _time timestamp to the same as found in the event data
We want to be able to easily search IOCs in SOAR. The Indicators tab is inconsistent: it shows IOCs for some but not all, and when comparing, it isn't clear why one is there and another is not. Could there be a way to ensure everything shows up using tagging? If so, how? Thanks!
Hi @Karthikeya , I could be more detailed if you can share the full event; anyway, you have to create a regex like the following:

| rex "host: (?<host>[^\s\n]+)"

Ciao. Giuseppe
Trying to get a permanent field extraction for a field. I tried the field extraction tab under Fields and gave the regex there, but it fails; the same regex works in search, and I don't know why. Below is the event:

host: juniper-uat.systems.fed
Connection: keep-alive
sec-ch-ua-platform: ""Windows""
X-Requested-With: XMLHttpRequest

I need to extract the host value as 'fqdn' permanently. I gave this regex - Host:\s(?<fqdn>(.+))\n - in the field extraction (screenshot attached below), but it is extracting the whole event value starting from the fqdn value, not extracting correctly. Please help me in this regard.
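One way to make the extraction permanent is a search-time EXTRACT in props.conf (a sketch; the sourcetype name is a placeholder, and the character class mirrors the regex suggested earlier in this thread):

# props.conf on the search head
[juniper:web]
EXTRACT-fqdn = (?i)host:\s(?<fqdn>[^\s\n]+)

Two things to note: the event shows lowercase "host:" while the regex above uses "Host:", hence the (?i); and (.+) is greedy, so a restricted class like [^\s\n]+ stops the capture at the end of the hostname instead of running on.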
Morning everyone,

I've been having a rough go trying to get some usable web usage reports out of Splunk for my Palo Alto traffic. Specifically, I'm trying to do what I think is a semi-simple thing. My test is going to a website like Amazon and then navigating around on the site, looking at different products (robotic vacuums in my case). Then I look at the traffic in Splunk, which reports back only, say, "2 Hits".

Palo reports the following: [screenshot]

I set my policy in Palo to log at session start.

My search in Splunk is this:

index="pan_firewall" log_subtype="url" chris.myers dest_zone="L3-Untrust" url="www.*"
user!="*solarwinds*" user!="*service*" user!=unknown
http_category!="work-related" http_category!="health-and-medicine" http_category!="government" http_category!="web-advertisements"
url!="ad.*" url!="www.abuseipdb.com*" url!="www.userbenchmark.com*" url!="www.xboxab.com*" url!="www.microsoft.com*"
url!="www.content.shi.com*" url!="www.shi.com*" url!="www.workday.com*" url!="www.patientfirst.visualstudio.com*"
url!="www.malwarebytes.com*" url!="www.globalknowledge.com*" url!="www.jetbrains.com*" url!="www.dnnsoftware.com*"
url!="www.juniper.net*" url!="www.intel.com*" url!="www.cpug.org*" url!="www.vmware.com*" url!="www.csirt.org*"
url!="ads.*" url!="www.vwc.state.va.us*" url!="www.atlantichealth.org*" url!="www.uhcprovider.com*" url!="www.checkpoint.com*"
url!=*rumiview.com* url!="*bing.com*" url!="www.facebook.com/plugins/*" url!="www.codechef.com*" url!="www.splunk.com*"
url!="www.aetna.com*" url!="www.radmd.com*" url!="www.humanamilitary.com*" url!="www.myamerigroup.com*"
url!="www.providerportal.com*" url!="www.vcuhealth.org*" url!="www.workcomp.virginia.gov*" url!="www.cisco.com*"
url!="www.va.gov*" url!="www.wcc.state.md.us*" url!=www.kraken.com* url!="www.medicaid.gov*" url!="www.scc.virginia.gov*"
url!="www.dli.pa.gov*" url!="www.maryland.gov*" url!="www.hscrc.state.md.us*" url!="www.msftncsi.com*"
url!="*.msftconnecttest.com*" url!="*.msftconnect.com*" url!="*.manageengine.com*" url!="*.ibm.com*"
url!="*.paloaltonetworks.com*" url!="www.nowinstock.net*" url!="*.centurylink.com*" url!="*.static-cisco.com*"
url!="*.arin.net*" url!="www.facebook.com/connect/*" url!="www.facebook.com/third_party/urlgen_redirector/*"
url!="*windstreamonline.com*" url!=*google* dest_hostname!=*fe2.update.microsoft.com dest_hostname!=crl.microsoft.com
url!=*windowsupdate* url!="www.telecommandsvc*" url!="www.redditstatic*" url!="www.redditmedia*" url!="www.gravatar.*"
dest_hostname!=*icloud.com dest_hostname!=*gstatic.com
url!=*.js url!=*.jpg url!=*.png url!=*.gif url!=*.svg url!=*.jpeg url!=*.css
| where isnull(referrer)
| top limit=25 dest_hostname
| rename dest_hostname as URL
| table URL, count

And my result is this: [screenshot]

What am I missing, or what am I not understanding? I would expect every page I visit, for every vacuum I look at, to be 1 hit. But my understanding has to be wrong: I went and viewed over 15 individual vacuums, so different product URLs, and Palo doesn't even seem to log it.
I am expecting to see something like this listed: https://www.amazon.com/Kokaidia-Navigation-Suction-Robotic-Cleaner/dp/B0DFT3B813/?_encoding=UTF8&pd_rd_w=7SKOA&content-id=amzn1.sym.7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_p=7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_r=KFZVRKBXWB2AZG55ENKY&pd_rd_wg=CjTy7&pd_rd_r=25c15f6d-78b3-46dc-8484-285dfeef98e2&ref_=pd_hp_d_atf_dealz_cs

I also looked at our Palo Alto application that is installed in Splunk, but it is just throwing a JavaScript error and providing no data output, so I have to visit that later. So I'm not even trying to pull that into the conversation, unless someone were to say that is how I should be looking at it and my search queries are the problem.

I know someone has experience with this and welcome any and all input. I am banging my head against the wall and open to anything.
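Two things in the search above may explain the low counts, shown here as a diagnostic sketch (same index and field names as the search above; the user= filter is illustrative): top dest_hostname collapses every Amazon page into a single row, so per-page counts need the full url field, and the isnull(referrer) filter discards in-site navigation, since product pages reached from Amazon's own listings typically carry a referrer.

index="pan_firewall" log_subtype="url" user="chris.myers" url="www.amazon.com*"
| stats count by url
| sort -count

If individual product URLs are missing even here, the gap is on the logging side rather than in the SPL.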
first dashboard - Base Search

index=A OR index=B
| search attack_type="$att_type$" severity="$severity$" vs_name="$vs_name$" violations="$violations$" sub_violations="$sub_viol$" uri="$uri$"

2nd dashboard - Base Search

index=C OR index=D
| search attack_type="$att_type$" severity="$severity$" vs_name="$vs_name$" violations="$violations$" sub_violations="$sub_viol$" uri="$uri$"

The log format is similar, but I need to merge these dashboards into one; all app owners will have access to this common dashboard, and they should have access to their respective app indexes only.
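A sketch of a single merged base search (assuming role-based index restrictions are in place, as discussed earlier in the thread, so each app owner only sees their own indexes):

index IN (A, B, C, D)
| search attack_type="$att_type$" severity="$severity$" vs_name="$vs_name$" violations="$violations$" sub_violations="$sub_viol$" uri="$uri$"

Each role's srchIndexesAllowed then acts as the access filter, so one dashboard can serve every app team without exposing other teams' data.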
This can be achieved using props.conf. Try these settings to start with:

[mysourcetype]
# The "Great Eight" settings
SHOULD_LINEMERGE = false
# Break lines only between a line ending and a date (year)
LINE_BREAKER = ([\r\n]+)\d{4}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
# Two settings for UFs
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}
Hello, I'm not exactly sure where to check the logs for this Add-On. Am I looking in Splunk or am I looking on the Azure side?
I believe this is another case of unclear documentation. The useSSL setting, as seen in the doc snippet you posted, does not say you don't need a cert, it says you don't need to set clientCert on th... See more...
I believe this is another case of unclear documentation. The useSSL setting, as seen in the doc snippet you posted, does not say you don't need a cert; it says you don't need to set clientCert on the forwarder if the receiver has requireClientCert = false. In other words, the useSSL setting on the forwarder is telling that forwarder to use TLS authentication, which is different than just encrypting your logs with TLS, which uses the TLS cert from the receiver. If you wish to encrypt your logs but don't need the receiver to require client TLS certs to authenticate, you don't need the useSSL=true setting. The other settings you listed, such as checking that the CN and SAN of the receiver cert match the indexer, are not required, since you told the client not to require a server cert when connecting.

So there are 3 related but distinct TLS topics here: log encryption using TLS, the forwarder authenticating the server using TLS, and the receiver authenticating the forwarder using TLS. The .conf.spec docs are not clear about which settings are for which TLS function, making it confusing.

useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to "false" on the receiver.
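To make the three functions concrete, here is a minimal encryption-only sketch (host names, ports, and cert paths are placeholders; it follows the spec excerpt above, with neither side authenticating the other):

# inputs.conf on the receiver
[splunktcp-ssl:9997]
disabled = 0

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
requireClientCert = false

# outputs.conf on the forwarder
[tcpout:primary_indexers]
server = idx1.example.com:9997
useSSL = true
sslVerifyServerCert = false

From there, setting clientCert on the forwarder plus requireClientCert = true on the receiver layers on forwarder authentication, and sslVerifyServerCert = true (with the CN/SAN checks) adds server authentication.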
Can you try this? https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
Try this workaround: https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752