All Posts



We want to be able to easily search IOCs in SOAR. The Indicators tab is inconsistent: it shows IOCs for some but not all, and when comparing, it isn't clear why one is there and another is not. Could there be a way to ensure everything shows up using tagging? If so, how? Thanks!
Hi @Karthikeya, I could be more detailed if you share the full event; anyway, you have to create a regex like the following:

| rex "host: (?<host>[^\s\n]+)"

Ciao. Giuseppe
Trying to get a permanent field extraction for a field. I tried using the field extraction tab and gave the regex there, but it fails; the same regex works in search, and I don't know why. Below is the event:

host: juniper-uat.systems.fed
Connection: keep-alive
sec-ch-ua-platform: ""Windows""
X-Requested-With: XMLHttpRequest

I need to extract the host value as 'fqdn' permanently. I gave this regex - Host:\s(?<fqdn>(.+))\n - in field extraction as attached below, but it is extracting the whole event starting from the fqdn value, not extracting correctly. Please help me in this regard.
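For reference, there are two likely culprits here: the pattern uses a capital "Host:" while the event has a lowercase "host:", and the greedy `(.+)` can run past the end of the line depending on how the extraction is applied. A sketch of a permanent extraction via props.conf (the sourcetype name is a placeholder; verify against your environment):

```
# props.conf (sourcetype name is a placeholder)
[your_sourcetype]
# case-insensitive match that stops at the end of the line
# instead of the greedy (.+), which can swallow the rest of the event
EXTRACT-fqdn = (?i)host:\s(?<fqdn>[^\r\n]+)
```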
Morning everyone,

Been having a rough go trying to get some usable web-usage reports out of Splunk for my Palo Alto traffic. Specifically, I'm trying to do what I think is a semi-simple thing. My test is going to a website like Amazon and then navigating around the site looking at different products (robotic vacuums in my case). Then I look at the traffic in Splunk, which reports back only, say, "2 Hits". Palo reports the following:

I set my policy in Palo to log at session start.

My search in Splunk is this:

index="pan_firewall" log_subtype="url" chris.myers dest_zone="L3-Untrust" url="www.*" user!="*solarwinds*" user!="*service*" user!=unknown http_category!="work-related" http_category!="health-and-medicine" http_category!="government" http_category!="web-advertisements" url!="ad.*" url!="www.abuseipdb.com*" url!="www.userbenchmark.com*" url!="www.xboxab.com*" url!="www.microsoft.com*" url!="www.content.shi.com*" url!="www.shi.com*" url!="www.workday.com*" url!="www.patientfirst.visualstudio.com*" url!="www.malwarebytes.com*" url!="www.globalknowledge.com*" url!="www.jetbrains.com*" url!="www.dnnsoftware.com*" url!="www.juniper.net*" url!="www.intel.com*" url!="www.cpug.org*" url!="www.vmware.com*" url!="www.csirt.org*" url!="ads.*" url!="www.vwc.state.va.us*" url!="www.atlantichealth.org*" url!="www.uhcprovider.com*" url!="www.checkpoint.com*" url!=*rumiview.com* url!="*bing.com*" url!="www.facebook.com/plugins/*" url!="www.codechef.com*" url!="www.splunk.com*" url!="www.aetna.com*" url!="www.radmd.com*" url!="www.humanamilitary.com*" url!="www.myamerigroup.com*" url!="www.providerportal.com*" url!="www.vcuhealth.org*" url!="www.workcomp.virginia.gov*" url!="www.cisco.com*" url!="www.va.gov*" url!="www.wcc.state.md.us*" url!=www.kraken.com* url!="www.medicaid.gov*" url!="www.scc.virginia.gov*" url!="www.dli.pa.gov*" url!="www.maryland.gov*" url!="www.hscrc.state.md.us*" url!="www.msftncsi.com*" url!="*.msftconnecttest.com*" url!="*.msftconnect.com*" url!="*.manageengine.com*" url!="*.ibm.com*" url!="*.paloaltonetworks.com*" url!="www.nowinstock.net*" url!="*.centurylink.com*" url!="*.static-cisco.com*" url!="*.arin.net*" url!="www.facebook.com/connect/*" url!="www.facebook.com/third_party/urlgen_redirector/*" url!="*windstreamonline.com*" url!=*google* dest_hostname!=*fe2.update.microsoft.com dest_hostname!=crl.microsoft.com url!=*windowsupdate* url!="www.telecommandsvc*" url!="www.redditstatic*" url!="www.redditmedia*" url!="www.gravatar.*" dest_hostname!=*icloud.com dest_hostname!=*gstatic.com url!=*.js url!=*.jpg url!=*.png url!=*.gif url!=*.svg url!=*.jpeg url!=*.css | where isnull(referrer) | top limit=25 dest_hostname | rename dest_hostname as URL | table URL, count

And my result is this:

What am I missing, or what am I not understanding? I would expect every page I visit, for every vacuum I look at, to be 1 hit. But my understanding has to be wrong: I went and viewed over 15 individual vacuums, so different product URLs, and Palo doesn't even seem to log it. I am expecting to see something like this listed:

https://www.amazon.com/Kokaidia-Navigation-Suction-Robotic-Cleaner/dp/B0DFT3B813/?_encoding=UTF8&pd_rd_w=7SKOA&content-id=amzn1.sym.7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_p=7768b967-1ffc-4782-be79-a66c5b1b9899&pf_rd_r=KFZVRKBXWB2AZG55ENKY&pd_rd_wg=CjTy7&pd_rd_r=25c15f6d-78b3-46dc-8484-285dfeef98e2&ref_=pd_hp_d_atf_dealz_cs

I also looked at our Palo Alto application that is installed in Splunk, but it is just throwing a JavaScript error and providing no data output, so I'll have to revisit that later. I'm not even trying to pull that into the conversation unless someone says that is how I should be looking at it and my search queries are the problem.

I know someone has experience with this and welcome any and all input. I am banging my head against the wall and open to anything.
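One thing worth checking independent of the Palo logging settings: `top limit=25 dest_hostname` aggregates by hostname, so every Amazon product page collapses into a single www.amazon.com row rather than one row per page. Counting by the full `url` field instead would surface individual pages. A stripped-down sketch (filters reduced to the essentials; the `user` filter form here is an assumption, since the original search has a bare `chris.myers` term):

```
index="pan_firewall" log_subtype="url" user="*chris.myers*" url="www.amazon.com*"
| stats count by url
| sort - count
```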
First dashboard - base search:

index=A OR B | search attack_type = "$att_type$" severity = "$severity$" vs_name = "$vs_name$" violations = "$violations$" sub_violations = "$sub_viol$" uri = "$uri$"

Second dashboard - base search:

index=C OR D | search attack_type = "$att_type$" severity = "$severity$" vs_name = "$vs_name$" violations = "$violations$" sub_violations = "$sub_viol$" uri = "$uri$"

The log format is similar, but I need to merge these dashboards into one. All app owners will have access to this common dashboard, and they should have access to their respective app indexes only.
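One possible way to sketch the merge (index names are the placeholders A-D from above): combine the indexes in a single base search and let each user's role-based index restrictions do the filtering, since results from indexes a role cannot read are simply not returned:

```
index IN (A, B, C, D)
| search attack_type="$att_type$" severity="$severity$" vs_name="$vs_name$" violations="$violations$" sub_violations="$sub_viol$" uri="$uri$"
```

An optional index drop-down (e.g. a hypothetical token `$app_index$` used as `index=$app_index$`) can be layered on top if users should also pick an app explicitly.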
This can be achieved using props.conf. Try these settings to start with:

[mysourcetype]
```The "Great Eight" settings```
SHOULD_LINEMERGE = false
```Break lines only between a line ending and a date (year)```
LINE_BREAKER = ([\r\n]+)\d{4}
TIME_PREFIX = ^
TIME_FORMAT = %Y-%m-%d %H:%M:%S.%4N
MAX_TIMESTAMP_LOOKAHEAD = 30
TRUNCATE = 10000
```Two settings for UFs```
EVENT_BREAKER_ENABLE = true
EVENT_BREAKER = ([\r\n]+)\d{4}
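For illustration only (these log lines are made up), a LINE_BREAKER of `([\r\n]+)\d{4}` starts a new event at each line beginning with a four-digit year, so continuation lines such as stack traces stay attached to the event they follow:

```
2024-05-01 10:15:32.1234 ERROR payment service failed
    at com.example.PaymentService.charge()
    at com.example.Api.handle()
2024-05-01 10:15:33.5678 INFO recovered
```

The first three lines above would become one event and the last line a second event.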
Hello, I'm not exactly sure where to check the logs for this Add-On. Am I looking in Splunk or am I looking on the Azure side?
I believe this is another case of unclear documentation. The useSSL setting, as seen in the doc snippet you posted, does not say you don't need a cert; it says you don't need to set clientCert on the forwarder if the receiver has requireClientCert = false. In other words, the 'useSSL' setting on the forwarder tells that forwarder to use TLS authentication, which is different from just encrypting your logs with TLS, which uses the TLS cert from the receiver. If you wish to encrypt your logs but don't need the receiver to require client TLS certs for authentication, you don't need the useSSL=true setting. The other settings you listed, such as checking that the CN and SAN of the receiver cert match the indexer, are not required since you told the client not to require a server cert when connecting.

So there are three related but distinct TLS topics here: log encryption using TLS, the forwarder authenticating the server using TLS, and the receiver authenticating the forwarder using TLS. The .conf.spec docs are not clear about which settings serve which TLS function, making this confusing.

useSSL = <true|false|legacy>
* Whether or not the forwarder uses SSL to connect to the receiver, or relies on the 'clientCert' setting to be active for SSL connections.
* You do not need to set 'clientCert' if 'requireClientCert' is set to "false" on the receiver.
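As a rough sketch of the encryption-only case (hostnames, ports, and stanza names are placeholders; verify each setting against the outputs.conf and inputs.conf specs for your Splunk version):

```
# outputs.conf on the forwarder: encrypt traffic to the indexer
# without client-certificate authentication
[tcpout:primary_indexers]
server = indexer.example.com:9997
# no clientCert here; we are only encrypting, not authenticating the client
sslVerifyServerCert = false

# inputs.conf on the receiver: TLS listener that does not require client certs
[splunktcp-ssl:9997]

[SSL]
serverCert = $SPLUNK_HOME/etc/auth/server.pem
requireClientCert = false
```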
Can you try   https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
Try this workaround   https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
Try  https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
Hi @splunklearner, could you share the two main searches in the two dashboards? Ciao. Giuseppe
Check out this alternative workaround: https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
Hi @Hemant_h, if the structure of your events is fixed, you could try something like this:

| rex field=product_name "^(?<field1>\w+)\s[^\s]+\s[^\s]+\s[^\s]+\s[^\s]+\s(?<field2>[^ ]+)"

Ciao. Giuseppe
If it's not related to a Windows restart, check out this alternative workaround: https://community.splunk.com/t5/Knowledge-Management/Solutions-quot-Splunk-could-not-get-the-description-for-this/td-p/694752
To date, we have separate dashboards for separate application teams. Now the ask is to create a common dashboard for all applications. Is that really possible? We have restricted users by index so they cannot see other applications' data. We don't have any app_name field in the logs either; the logs are only segregated by index, and the sourcetype is the same. The log format for all applications is similar. How can I achieve this? Should I extract app_name from the host field, put it in a drop-down, and include the index in a drop-down as well? Is it really possible? Please help me with your action plan for this.
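If the app can be inferred from the host naming convention, one sketch (the `appname-rest` hostname pattern, index names, and token name here are entirely assumptions to adapt): derive app_name at search time and drive a drop-down with it, while role-based index restrictions continue to enforce access:

```
index IN (appA_idx, appB_idx, appC_idx)
| rex field=host "^(?<app_name>[^-]+)-"
| search app_name="$app_name_token$"
| stats count by app_name, host
```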
How does it mess up the nexpose appliance?
We are a small Managed Service Provider (MSP) currently testing Splunk with a deployment on a Windows 2019 server using a trial version. After adding a remote device and integrating the Fortinet FortiGate App for Splunk, the system functioned well initially. However, the next day, we noticed that the system encountered 5 violations in one night. Subsequently, when accessing the dashboard, we were greeted with the following message: "Error in 'rtlitsearch' command: Your Splunk license expired or you have exceeded your license limit too many times. Renew your Splunk license by visiting www.splunk.com/store." Is there a way to resolve this issue without performing a full reinstall? Additionally, is there a way to set a limit on the amount of data being indexed to avoid triggering the violation? I have come across references to routing logs to the "nullQueue" and would appreciate feedback from the community on this approach or any other recommended solutions. Thank you in advance for your help!
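On the nullQueue idea: the usual pattern (sketch only; the sourcetype name and the REGEX are placeholders you'd adapt to the FortiGate events you want to drop) is to discard noisy events at parse time on the indexer or heavy forwarder, so they are never indexed and never count against the license:

```
# props.conf
[fgt_traffic]
TRANSFORMS-drop_noise = drop_noise

# transforms.conf
[drop_noise]
REGEX = level=informational
DEST_KEY = queue
FORMAT = nullQueue
```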
Hello, I really need help. Why is the 'Create Server' button in the Splunk App for SOAR disabled? After installing this app on a Splunk search head cluster (9.3.1) through the deployer, I still don't have a role named 'phantom'. I would really appreciate a response.
Hi, thanks for the response. We don't have _time, but we have a Time column (index time; it is the same for all events, so we can't use the Time column). My expectation is that events without a timestamp need to be merged with the previous event using some logic. I don't need to save the results; they will be used for some calculation and then saved by a saved search. Yes, this is always the case for all logs, so I need to write a query to transform this. Please help with this and share your comments.
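One search-time sketch (it assumes a "real" event starts with a recognizable prefix such as a date; that pattern, plus the index and sourcetype names, are assumptions to replace with whatever actually marks your event starts): use streamstats to assign a running group id that increments only on lines that begin an event, then collapse each group into one merged event:

```
index=your_index sourcetype=your_sourcetype
| streamstats count(eval(match(_raw, "^\d{4}-\d{2}-\d{2}"))) as event_group
| stats list(_raw) as raw_lines by event_group
| eval merged_event = mvjoin(raw_lines, " ")
| table event_group merged_event
```

Note that search results stream in reverse time order by default, so you may need to sort into original log order before the streamstats for the grouping to line up.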