All Topics

Find Answers
Ask questions. Get answers. Find technical product solutions from passionate members of the Splunk community.


I would like to know whether, after I install the MITRE ATT&CK app in Enterprise Security, the dashboard will populate automatically, or whether I also need to adjust my use case naming conventions. Right now our use cases are not mapped to MITRE techniques. For example, a use case is currently named "Failed Logon Accounts"; to use the MITRE app for Splunk, do I need to rename it to "T1110-Failed Logon Accounts"?
Hello, we are wondering whether anyone else has experienced issues using a Kubernetes cluster of heavy forwarders to receive AWS Kinesis Firehose data into a GCP-hosted Splunk Enterprise setup via HEC. We are seeing many duplicate events and, on the flip side, some timeouts, which cause events to be delivered to the S3 backup bucket rather than ingested into Splunk. We thought this was an isolated issue in our setup, so we built a pre-prod environment with the same configuration, and the same problem occurs there.
Hello all, I thought I had this down, but not quite. Here is the scenario. I have two fields: 1. "Sent Invite Time" and 2. "Received Invite Time". The Received Invite Time should occur within 1440 minutes of the Sent Invite Time, and I am searching for cases where the duration between the two fields exceeds 1440 minutes. The problem is that events come up as "Not Received Invite" simply because the search has not yet given field 2 the full 1440 minutes to occur. How can I anchor on field 1, "Sent Invite Time", and allow 24 hours for field 2 to occur from the time field 1 happened before flagging an event? I was hoping to do this in the where clause:

| where Field1-Field2>1440
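A sketch of one approach, assuming both fields are strings that strptime can parse (the index name and the timestamp format string below are assumptions to adjust): convert both fields to epoch time, compute the gap in minutes, and only evaluate events whose send time is at least 24 hours old, so field 2 always gets its full window:

```
index=your_index
| eval sent_epoch=strptime('Sent Invite Time', "%Y-%m-%d %H:%M:%S")
| eval recv_epoch=strptime('Received Invite Time', "%Y-%m-%d %H:%M:%S")
| eval duration_min=(recv_epoch-sent_epoch)/60
| where sent_epoch <= relative_time(now(), "-24h")
| where isnull(recv_epoch) OR duration_min > 1440
```

The second where clause then flags both invites that took longer than 1440 minutes and invites that were never received at all.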
My private Splunk app is showing as Fail in the Python Upgrade Readiness App, but my code is compatible with Python 3 on Splunk 8.2.1. I have tried updating server.conf with:

[general]
python.version = python3

but this did not work. Can someone share suggestions? This is the result shown:

Private Apps: ABC Private App: Fail
Details: This app is not compatible with Python 3.
Application Path: /opt/splunk/etc/apps/ABC
Required Action: Update this app or uninstall it. If you do nothing, the app will fail.
Issue: File path designates Python 2 library.
Hello, we are facing an issue using the Website Monitoring app to monitor URLs: it increases the number of Splunk PIDs and causes the Splunk service on our heavy forwarder to stop automatically. Is there another way (script or add-on) to monitor URLs in Splunk without using the Website Monitoring app? Thanks.
Hi all, I am facing some errors with the Tripwire Enterprise Add-on, so it would be helpful if anyone could provide me with contact details for the Tripwire Enterprise Add-on support team. Thanks in advance.

Best regards,
Vinod Kumar
Hi, I'm using Phantom v4.10.3.51237, and my vulnerability assessment team found a security vulnerability: "nginx Byte Memory Overwrite RCE". Is it possible to update nginx from v1.19.2 to v1.20.1?
Hey all, I am creating an app that runs inside Splunk, following this tutorial: https://splunkui.splunk.com/Create/AppTutorial . When I try to use just a dashboard definition, I think there is an issue with authentication or something like that. I could not figure out how to use Dashboard Studio dashboard definitions to create visuals; all of the examples show hard-coded data sources. I'm stumped on how to create views that depend on Splunk query results in ReactJS. I am using Splunk Enterprise with a Developer license. To make things short: how do I use a Splunk query as a data source for a ReactJS Splunk app on Splunk Enterprise?

Ted
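For what it's worth, the Splunk UI toolkit ships an @splunk/search-job package for this pattern. A sketch, assuming the app runs inside Splunk Web so the session handles authentication (the search string and time range are placeholders):

```javascript
import SearchJob from '@splunk/search-job';

// Create a search job on the Splunk instance hosting the app
const searchJob = SearchJob.create({
    search: 'search index=_internal | stats count by sourcetype',
    earliest_time: '-24h@h',
    latest_time: 'now',
});

// getResults() returns an observable; subscribe and push the
// results into component state (e.g. inside a useEffect hook)
const subscription = searchJob.getResults().subscribe(results => {
    console.log(results.results);
});

// Call subscription.unsubscribe() when the component unmounts
```

The results can then be wired into a visualization component as its data, instead of a hard-coded data source.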
I have a QR string that our custom QR parser can take apart nicely, but I can't use ordinary field extraction for this. How do I write a custom rex? Example QR string:

00020101021253037045405100005802VN38620010A0000007270132000697110001180003131000000032040208QRIBFTTA624101121000000032040821 ORD_6328416304A3AC

The substring I want to extract is 00069711000118000313100000003204 as field1, and its first six characters, 000697, as field2. Fortunately, the four characters before it and the four characters after it never change.
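A sketch of a rex based on the sample above. The anchors "0132" before and "0208" after, and the fixed 32-digit length, are read off the one example string, so verify them against more of your data before relying on them:

```
index=your_index
| rex field=_raw "0132(?<field1>\d{32})0208"
| eval field2=substr(field1, 1, 6)
```

Deriving field2 with substr avoids a second regex, since it is defined as the first six characters of field1.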
I have an alert that I want to run between 23:00 and 06:00; during that window the search should cover "Last 24 hours", and the result should be emailed at 08:00 every day. I have yet to find a way to trigger this.
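One common pattern, sketched below (the stanza name and email address are placeholders): if a single daily run is acceptable, a cron-scheduled saved search at 08:00 over the previous 24 hours covers both the search window and the email time in one schedule:

```
# savedsearches.conf (illustrative)
[Nightly Invite Check]
cron_schedule = 0 8 * * *
dispatch.earliest_time = -24h@h
dispatch.latest_time = now
action.email = 1
action.email.to = you@example.com
```

If the search genuinely must execute repeatedly between 23:00 and 06:00, a cron expression such as 0 0-6,23 * * * runs it hourly inside that window, but the email action then fires per run rather than once at 08:00.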
We have an OTel-compliant exporter. What definitions are required in an exporter configuration to send OTel-compliant data to Splunk Observability Cloud? We do not see examples in this list: https://docs.splunk.com/Observability/gdi/get-data-in/get-data-in.html . We do not want to use the Observability Cloud API; we want to send data directly in native OTel format. Thank you.
Hi there, I am trying to diff each new version against the one-version-older records and extract only the differences: for example, ver 1.3 against 1.2, and ver 1.2 against 1.1. I hope to do this in a flexible way, since in future I may have ver 1.4 and so on. I also want to limit the results to the latest five version diffs; for example, if I have versions 1.1 through 1.10, I only want the diffs for 1.6 through 1.10, each against its previous version. Is that possible?

Currently I have data like this:

index=a, ver=1.1, a="halo", b="haha", c="nana"
index=a, ver=1.1, a="testing", b="haha", c="nana"
index=a, ver=1.1, a="halo", b="kaka", c="testing"

index=a, ver=1.2, a="halo", b="haha", c="nana"
index=a, ver=1.2, a="lala", b="haha", c="nana"
index=a, ver=1.2, a="halo", b="kaka", c="TESTING"

index=a, ver=1.3, a="halo", b="haha", c="nana"
index=a, ver=1.3, a="lala", b="haha", c="tata"
index=a, ver=1.3, a="halo", b="kaka", c="lala"
index=a, ver=1.3, a="halo", b="kaka", c="kakaka"

Expected result when comparing ver 1.2 against 1.1 and ver 1.3 against 1.2 (records shown as a, b, c joined with ","):

ver | added records                                      | removed records
1.2 | lala,haha,nana ; halo,kaka,TESTING                 | testing,haha,nana ; halo,kaka,testing
1.3 | lala,haha,tata ; halo,kaka,lala ; halo,kaka,kakaka | lala,haha,nana ; halo,kaka,TESTING
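A sketch for one pair of versions, using the field and version values from the question (generalizing to "each version against its predecessor" would need something further, e.g. a scheduled search per pair or a streamstats-based approach): group records by their joined content and classify each record by which versions it appears in:

```
index=a (ver="1.2" OR ver="1.3")
| eval record=a.",".b.",".c
| stats values(ver) as vers by record
| eval status=case(mvcount(vers)=2, "unchanged",
                   vers="1.3", "added",
                   vers="1.2", "removed")
| where status!="unchanged"
| stats values(record) as records by status
```

A record present in both versions is unchanged; one present only in the newer version was added, and one present only in the older version was removed.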
Please, I need help with a detailed accelerated data model query over the Authentication data model for successful-login alerts, using | tstats summariesonly=true. The query should have a count threshold, and it should cover all products and vendors used in the environment, not only successful Windows logons.
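A starting-point sketch, assuming the CIM Authentication data model is accelerated; the threshold of 10 and the grouping fields are placeholders to tune for your environment:

```
| tstats summariesonly=true count
    from datamodel=Authentication
    where Authentication.action="success"
    by Authentication.user, Authentication.src, Authentication.app
| rename Authentication.* as *
| where count > 10
```

Because this runs against the data model rather than raw Windows events, it automatically covers every product and vendor whose add-on is CIM-mapped into the Authentication model.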
Hello, I have logs in Splunk of the following form:

timestamp "LOGGER= PAGE NAME1 Other text"
timestamp "LOGGER= PAGE NAME1 Other text"
timestamp "LOGGER= PAGE NAME2 Other text"
timestamp "LOGGER= PAGE NAME2 Other text"
timestamp "LOGGER= PAGE NAME3 Other text"
timestamp "LOGGER= PAGE NAME3 Other text"
timestamp "LOGGER= PAGE NAME1 Other text"

I wrote this search query:

index=index-name ns="namespace"
| rex field=_raw "LOGGER=\s*(?<PAGE_NAME1>PAGE NAME1*)"
| stats count by PAGE_NAME1
| append
    [search index=index-name ns="namespace"
    | rex field=_raw "LOGGER=\s*(?<PAGE_NAME2>PAGE NAME2*)"
    | stats count by PAGE_NAME2]
| append
    [search index=index-name ns="namespace"
    | rex field=_raw "LOGGER=\s*(?<PAGE_NAME3>PAGE NAME3*)"
    | stats count by PAGE_NAME3]

and the result puts each page name in its own column:

PAGE_NAME1 | count | PAGE_NAME2 | PAGE_NAME3
PAGE NAME1 | 3     |            |
           | 2     | PAGE NAME2 |
           | 2     |            | PAGE NAME3

What I want instead is:

Page Name  | Pages Visited
PAGE NAME1 | 3
PAGE NAME2 | 2
PAGE NAME3 | 2

Any idea how to format the search query?
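A sketch of a single-pass version, taking the index, namespace, and LOGGER pattern from the question (the \d+ assumes the page names end in a digit): extracting all page names into one field lets a single stats produce one row per page, with no append needed:

```
index=index-name ns="namespace"
| rex field=_raw "LOGGER=\s*(?<page_name>PAGE NAME\d+)"
| stats count as "Pages Visited" by page_name
| rename page_name as "Page Name"
```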
I've upgraded from Splunk 8.0.3 to 8.2.2, and now I'm getting errors for my metrics query. This used to work:

| mstats rate(_value) prestats=true WHERE metric_name="traffic_in" AND index="em_metrics" AND description="EDGE" AND name_cache="EDGE" span=60s BY name_cache
| timechart rate(_value) span=120s useother=false BY name_cache
| fields - _span*
| rename "EDGE" as traffic_in
| eval Gb_in=(traffic_in*8/1000/1000/1000)
| append
    [| mstats rate(_value) prestats=true WHERE metric_name="traffic_out" AND index="em_metrics" AND name_cache="EDGE" span=60s BY name_cache
    | timechart rate(_value) span=120s useother=false BY name_cache
    | fields - _span*
    | rename "EDGE" as traffic_out
    | eval Gb_out=(traffic_out*8/1000/1000/1000)]
| selfjoin keepsingle=true _time
| fields _time Gb_in, Gb_out

Now I get an error that says: The following join field(s) do not exist in the data: '_time'. Has anything changed from 8.0.3 to 8.2.2 that could explain this?
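As a possible workaround rather than an explanation of the error (a sketch using the metric, index, and dimension names from the question): recent mstats versions accept several metric names in one search, which sidesteps the append and selfjoin entirely:

```
| mstats rate(traffic_in) as in_rate rate(traffic_out) as out_rate
    WHERE index="em_metrics" AND name_cache="EDGE" span=120s
| eval Gb_in=(in_rate*8/1000/1000/1000), Gb_out=(out_rate*8/1000/1000/1000)
| fields _time Gb_in Gb_out
```

Computing both rates in a single mstats keeps the two series aligned on the same _time buckets by construction.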
I am using the Graph API TA to pull in logs, but I need a second installation of the same app on the same heavy forwarder in order to pull in federal logs, because the original app points to .com addresses and the second needs to point to .us addresses. When I install the copied app, it will not load or display the configuration screen. Does anyone know what needs to be changed within the app to make this possible? I tried renaming the folder so that it would be "Graph app_B", but the TA will not load after that change. Is there any other .conf file that requires adjustment? Please be as specific as possible. Thanks!
Hi all, we are working on getting Appian data into Splunk. Appian has been configured to push logs to the Splunk syslog endpoint. We have tried several things, but the data received in Splunk still looks encrypted.

https://docs.appian.com/suite/help/21.2/Log_Streaming_for_Appian_Cloud.html#prerequisite-checklist

Below is the Splunk config. Please help if you have done this in the past or know how to fix it.

Splunk version: 8.1.4

etc\apps\search\local\inputs.conf

[tcp://514]
connection_host = ip
index = appian
sourcetype = syslog

[SSL]
requireClientCert = false
serverCert = $SPLUNK_HOME\etc\auth\splunkweb\myDataCertificate.pem
sslVersions = tls1.2
cipherSuite = ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:DH-DSS-AES256-GCM-SHA384:DHE-DSS-AES256-GCM-SHA384:DH-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DH-RSA-AES256-SHA256:DH-DSS-AES256-SHA256:ADH-AES256-GCM-SHA384:ADH-AES256-SHA256:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:AES256-GCM-SHA384:AES256-SHA256

Note: myDataCertificate.pem is a combination of server + interim CA + root CA certificates.

Sample "encrypted" data:

10/10/21 8:04:18.000 PM \x00\x00\x00\x9E\x00\x9F\xC0|\xC0}\x003\x00g\x009\x00k\x00E\x00\xBE\x00\x88\x00\xC4\x00\x00\xA2\x00\xA3\xC0\x80\xC0\x81\x002\x00@\x008\x00j\x00D\x00\xBD\x00\x87\x00\xC3\x00\x00f\x00\x00D\x00\x00\x00\x00\x00\x00\xFF\x00\x00\x00#\x00\x00\x00
10/10/21 8:04:18.000 PM \xC0r\xC0\xC0\xC0/\xC00\xC0\x8A\xC0\x8B\xC0\xC0'\xC0\xC0v\xC0\xC0\x00\x9C\x00\x9D\xC0z\xC0{\x00/\x00<\x005\x00=\x00A\x00\xBA\x00\x84\x00\xC0\x00
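For what it's worth, the sample "encrypted" events look like TLS handshake bytes (cipher suite lists) arriving on a plain-TCP input. A sketch of the usual fix, reusing the port, index, and certificate path from the question: declare the input as [tcp-ssl://...] so splunkd terminates TLS on that port instead of indexing the raw handshake:

```
# inputs.conf (sketch)
[tcp-ssl://514]
connection_host = ip
index = appian
sourcetype = syslog

[SSL]
serverCert = $SPLUNK_HOME\etc\auth\splunkweb\myDataCertificate.pem
requireClientCert = false
sslVersions = tls1.2
```

With a plain [tcp://514] stanza, Splunk accepts the connection but never negotiates TLS, so the ciphertext is indexed verbatim.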
In my large environment we have Splunk Enterprise Security in a clustered environment, and we are onboarding new teams into Splunk. To avoid housing all the new teams' search content under the default Search & Reporting app, how does one create a separate Search & Reporting-style app, so the new teams don't have to work inside the default one? Thanks a million in advance for your help.
I've just set up a new account (james_e_thompson) on the new Splunk Portal that cut over last week on 11/11/2021. I have the Splunk On-Call app on my iPhone. Question 1: do I need to update or switch to a new app in conjunction with setting up a new account for the Splunk Portal? The current app on my phone is Splunk On-Call version 7.63.688. Question 2: how do I validate that all is well with my new Splunk Portal account and whichever app needs to be available on my phone?
Hello experts, I have recently started exploring the Splunk APIs and am trying to find a REST API to add data to a Splunk index. Can someone tell me whether this is possible? I have already looked through the Splunk API documentation and could not find an endpoint that performs this task. Any input or guidance will be highly appreciated.

Thanks,
Nitheesh
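For reference, two documented routes exist; the host names, credentials, token, and index below are placeholders. One is the HTTP Event Collector (HEC) on port 8088, which needs a HEC token created in Splunk; the other is the simple receiver endpoint on the management port 8089, which takes normal Splunk credentials:

```
# HTTP Event Collector: JSON event payload
curl -k https://splunk-host:8088/services/collector/event \
  -H "Authorization: Splunk <hec-token>" \
  -d '{"event": "hello world", "index": "main", "sourcetype": "demo"}'

# Management REST API: services/receivers/simple, raw event body
curl -k -u admin:changeme \
  "https://splunk-host:8089/services/receivers/simple?index=main&sourcetype=demo" \
  -d "hello world"
```

HEC is generally the recommended option for ongoing ingestion, since tokens can be scoped and rotated without exposing user credentials.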