All Topics

Hi All, I am using Splunk Cloud where I have an index whose retention period is set to 10 years. I want to understand how I can get an email alert once my index is close to its retention period.
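One possible direction (a sketch only, not verified on Splunk Cloud; `my_index` and the ~9.5-year threshold are placeholder assumptions) is a scheduled alert on the age of the oldest data reported by `dbinspect`:

```spl
| dbinspect index=my_index
| stats min(startEpoch) AS oldest_event
| eval age_days = round((now() - oldest_event) / 86400, 0)
| where age_days > 3468
```

Saved as an alert with an email action, this would fire once the oldest data is within roughly half a year of a 10-year retention period.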
Hi, in the search view of my own app I created an extracted field using the wizard. Once created, I went to Settings --> Field extractions, where I can see the extracted field: type inline, assigned to my app, enabled, and with read and write permissions on the app for everyone. Then I went back to my app and ran a simple query in verbose mode, clicking All fields to be sure that all fields are actually shown: index=cisco sourcetype="cisco:esa:amp". Unfortunately the extracted field does not show in the list. Any idea what I am missing? Many thanks.
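One way to check whether the extraction is actually visible in the current context (a sketch; the app name is a placeholder) is the REST endpoint for props extractions:

```spl
| rest /services/data/props/extractions
| search eai:acl.app=your_app_name
| table title attribute value eai:acl.app eai:acl.sharing
```

If the extraction is listed but bound to a different sourcetype than `cisco:esa:amp`, that would explain why the field does not appear.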
I am onboarding data from Trend Micro Portable Security via HEC. Per the Trend Micro documentation, it needs 5 indexes created on the Splunk side, namely scanned log, detectedlog, assetinfo, updateinfo, and application info. We created these indexes on the HF and used the following transform to send everything to a single index on the indexers:

[trendmicro_routing]
REGEX = .
DEST_KEY = _MetaData:Index
FORMAT = app_trendmicro

We also created different sourcetypes for each of the 5 log categories. The transforms stanzas used are:

[tmps_scannedlogs]
REGEX = scannedFiles\=
FORMAT = sourcetype::tmps_scannedlogs
DEST_KEY = MetaData:Sourcetype

[tmps_detectedlogs]
REGEX = threatType\=
FORMAT = sourcetype::tmps_detectedlogs
DEST_KEY = MetaData:Sourcetype

[tmps_assetinfo]
REGEX = physicalMemory\=
FORMAT = sourcetype::tmps_assetinfo
DEST_KEY = MetaData:Sourcetype

[tmps_applicationinfo]
REGEX = installPath\=
FORMAT = sourcetype::tmps_applicationinfo
DEST_KEY = MetaData:Sourcetype

[tmps_updateinfo]
REGEX = ^(?!.*(scannedFiles|threatType|physicalMemory|installPath)).*
FORMAT = sourcetype::tmps_updateinfo
DEST_KEY = MetaData:Sourcetype

Now the scanned and detected logs have a time format like startTime=Jun 13 2022 14:29:4, the asset info logs have a different time format like systemDateAndTime=16062022 12:47:26, and the remaining log types (updateinfo, applicationinfo) have no timestamp at all. As I understand it, timestamp settings cannot be applied after routing to different sourcetypes. How can I make Splunk parse the different time formats and apply the proper settings?
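Since index-time timestamp extraction happens before the sourcetype rewrite, one possible direction (a sketch only; the stanza name `tmps:hec` and the file path are assumptions, and `datetime_tmps.xml` is a hypothetical custom datetime config) is to configure timestamping on the original incoming sourcetype with a datetime.xml that covers both formats:

```ini
# props.conf on the HF, applied to the sourcetype the data arrives with
[tmps:hec]
TIME_PREFIX = (startTime|systemDateAndTime)=
MAX_TIMESTAMP_LOOKAHEAD = 25
# datetime_tmps.xml (hypothetical) would define patterns for both
# "Jun 13 2022 14:29:4" and "16062022 12:47:26"; events matching
# neither fall back to the default timestamping behavior.
DATETIME_CONFIG = /opt/splunk/etc/apps/your_app/datetime_tmps.xml
```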
Hello Splunkers! Can anyone please share your thoughts on whether we can monitor .accdb files via Splunk, i.e. integrating MS Access with Splunk? Can we use Splunk DB Connect? Thanks in advance! - Sarah
Hi, I'm doing a project and I've installed a Splunk Enterprise trial on one server and the Universal Forwarder on three other servers (Ubuntu) that send me logs. On the forwarders there is a script that sends logs of every process running on the server. I would like to create a dynamic list to which process logs are added and tagged as "Well-Known Processes". After that, when new process logs come to the indexer, they are compared with the dynamic list, and if a process is not recognized (it doesn't exist in the list) an alert is triggered. I want to do this to check for suspicious processes. Thanks
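A common pattern for this kind of allowlist (a sketch; `process_name` and the index name are assumptions about how the script's logs are parsed) is a CSV lookup maintained with `outputlookup`:

```spl
index=your_process_index
| stats count by process_name
| fields process_name
| outputlookup well_known_processes.csv
```

A scheduled alert can then compare new events against the list and trigger on anything not yet known:

```spl
index=your_process_index
| search NOT [| inputlookup well_known_processes.csv | fields process_name]
| stats count by host process_name
```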
Dear Sir or Madam, we tried to install the application "Splunk DB Connect" from within Splunk's "Browse more apps" interface. Since this installation failed with the message "", we tried a manual download of the app. When we went on to verify the download's hash, we noticed that the hash of the downloaded file differs from the one presented on the download page.

Splunkbase website (https://splunkbase.splunk.com/app/2686/):
SHA256 checksum (splunk-db-connect_390.tgz): 39ccb237a66ff15e71d7db6ed08b8fa03c33a6ebc0455ff47c3cbe6a2d164838

Local splunk-db-connect_390.tgz:
SHA256: 47EE54FFF7B4017C97BBF29DEE31C049AB77882ABED4EC4D6FC40DA3F90DF243

The "Browse more apps" interface of Splunk also mentions that the app was updated about 7h ago. Kind regards, Frederik Brönner
Hi, I've installed a Splunk Enterprise trial on one server and the Universal Forwarder on three other servers (Ubuntu) that send me logs. I want to create an alert for a specific use case. This alert has to send a notification and run a script on the server where the alert was triggered. Is that possible? Thanks
Hi, I need to join data from my 2 sources A and B on the fields "Workitems_URL" and "Work Item URL". In source B there is a field "Type Name"; for a join match to be valid, this field must contain the value "Default". If any of the matches does not have "Type Name" set to "Default", it should not be counted. I tried to use a filter "Type Name"="Default" on source B, but it does not seem to be correct. Please let me know how to make this work. Thanks
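One possible shape for this (a sketch; the index names `A` and `B` are placeholders, and field names are copied from the description) is to filter source B inside the join's subsearch, so only "Default" rows can ever match:

```spl
index=A
| rename Workitems_URL AS url
| join type=inner url
    [ search index=B "Type Name"="Default"
      | rename "Work Item URL" AS url ]
```

With `type=inner`, rows from A that find no "Default" match in B are dropped rather than counted.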
Hi All, below are 2 sets of raw events from my DDoS appliance. The sets are separated based on the eventID field. In the 1st set there are 3 events for eventID=7861430818955774485, while in the 2nd set eventID=7861430818955774999 has 2 events. There is also a field called status, which is a multivalue field. You will notice that in the 1st set status has 3 values (starting, holding, and end), while in the 2nd set status has 2 values (starting and holding).

Jun 20 13:58:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:57:38+08:00;eventID=7861430818955774485;status=starting;dstip=10.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=3450,bps=39006800;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:59:05 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:58:07+08:00;eventID=7861430818955774485;status=holding;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 14:00:07 172.x.x.x logtype=attackevent;datetime=2022-06-20 13:59:07+08:00;eventID=7861430818955774485;status=end;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:58:05 64.x.x.x logtype=attackevent;datetime=2022-06-20 13:57:38+08:00;eventID=7861430818955774999;status=starting;dstip=10.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=3450,bps=39006800;subtype=FIN/RST Flood;attackDirection=inbound;
Jun 20 13:59:05 64.x.x.x logtype=attackevent;datetime=2022-06-20 13:58:07+08:00;eventID=7861430818955774999;status=holding;dstip=14.x.x.x;eventType=DDoS Attack Alert;severity=high;description=pps=0,bps=0;subtype=FIN/RST Flood;attackDirection=inbound;

My requirement is to show (in a table or stats) only those eventIDs that do not have a status=end in their span.
Basically, from the above raw events the search should filter out the 1st set completely and only show data for the 2nd set, because there is no status=end for eventID=7861430818955774999. How do I achieve this? I am using the search below, but it still ends up listing the 1st eventID too.

...base search...
| eval start=if(in(status, "starting"), _time, NULL), end=if(in(status, "holding"), _time, NULL)
| stats earliest(start) AS start_time, latest(end) AS end_time, values(eventID), values(status), values(eventType), values(severity), values(dstip), values(subtype), dc(status) AS dc_status BY eventID
| where dc_status > 2
| eval duration=tostring((end_time-start_time), "duration")
| convert ctime(start_time) | convert ctime(end_time)
| rename values(*) AS *

If I put status!=end in my base search, it still shows the other events where status=starting or holding for that eventID. I want to eliminate the eventID completely (as in all events for that eventID) from my results when it has status=end. Any suggestions?
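Since `stats ... BY eventID` groups every event for an ID into one row, one way (a sketch built only from the fields shown above) is to drop any group whose multivalue status contains "end" after the grouping, rather than filtering the raw events:

```spl
...base search...
| stats values(status) AS status, earliest(_time) AS start_time, latest(_time) AS end_time,
        values(eventType) AS eventType, values(severity) AS severity,
        values(dstip) AS dstip, values(subtype) AS subtype BY eventID
| where isnull(mvfind(status, "^end$"))
```

`mvfind` returns the index of the first matching value and null when nothing matches, so the `where` keeps only eventIDs that never reached status=end.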
We have Prod and Non-Prod environments; about 2 weeks ago we started to get this issue in our Non-Prod environment. I have compared the outputs.conf files for all my HFs and found no issues, telnet from the HFs to the indexers works, and the certificates are in order. I have also compared the Non-Prod outputs.conf to my Prod outputs.conf files. After searching online I have not found anything about this. Has anyone come across it before? Any help would be appreciated!
Hi Community, could any of you please let me know if there is any way, or a pre-written app, to connect Azure Sentinel with Splunk SOAR? As of now, I am not able to find any app for this. Regards, Saurabh
How can I create an alert when a location goes down?

index=internal sourcetype=abc
| timechart span=5m count(linecount) AS Count by loc useother=f usenull=f
| sort by _time desc
| table _time loc
| where loc < 10

2022-06-22 02:15:00 0 0 0 102 949 941 967 969 45 33
2022-06-22 02:14:00 0 0 0 143 1167 1139 1146 1195 49 75
2022-06-22 02:13:00 0 0 0 134 874 827 891 876 29 46
2022-06-22 02:12:00 1 0 0 130 770 789 773 736 59 60
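One possible shape for the alert (a sketch; it assumes "down" means zero events in a 5-minute span): `untable` converts the timechart back to one row per location per span, so a zero count can be tested directly instead of comparing columns:

```spl
index=internal sourcetype=abc
| timechart span=5m count AS Count by loc useother=f usenull=f
| untable _time loc Count
| where Count = 0
```

Scheduled over the most recent complete span with "trigger when number of results > 0", this would fire once per silent location.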
Hello Team, I am creating an index via the Splunk Cloud REST APIs. It gets created, but it is not visible to me in Splunk Cloud Web. Is there an access issue with my account? I have the following roles: apps, can_delete, enable_automatic_ui_updates, ite_internal_admin, power, sc_admin, tokens_auth, user. Thanks, Venkata
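If the endpoint is exposed on the stack (not guaranteed on Splunk Cloud; the index name is a placeholder), a quick check of how the index object was scoped is:

```spl
| rest /services/data/indexes/your_index_name
| table title eai:acl.app eai:acl.sharing disabled
```

If the index was created in an unexpected app context or with private sharing, that could explain why it does not appear in Splunk Cloud Web.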
The AppDynamics Java Agent is one type of bytecode injection (BCI) agent. So, what does it do with the Java application? From what I have seen, it intercepts the HTTP request and modifies it by adding a header for local proxy authentication.
I have a subsearch that gives the output shown below.

Subsearch:
[ search index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
| fields error.requestId
| rename error.requestId as requestId
| dedup requestId
| format ]

Output:
( requestId="jjadjdfjjedd_jehdfjdjfhj" ) OR ( requestId="jgjfnfdn_jrhfjdbfd" ) ...

I need to edit the format that is returned from the first query. Is there a way to change the search to something less specific, such as (*jjadjdfjjedd_jehdfjdjfhj*) OR (*jgjfnfdn_jrhfjdbfd*)? I need to find all events that include the requestId anywhere, not just when it is the value of that specific field.
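One commonly used trick (a sketch; worth verifying on your version) is to build the wildcard string in a field literally named `search`, whose values a subsearch returns verbatim instead of as field=value pairs:

```spl
index=prod_diamond sourcetype=CloudWatch_logs source=*downloadInvoice* AND *error* NOT ("lambda-warmer")
| rename error.requestId AS requestId
| dedup requestId
| eval search = "*" . requestId . "*"
| fields search
| format
```

Used as the subsearch, this should expand into terms like ( *jjadjdfjjedd_jehdfjdjfhj* ) OR ( *jgjfnfdn_jrhfjdbfd* ), which match the IDs anywhere in the raw event.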
I am working on producing a table that calculates the number of incidents resolved by each analyst. What my query does is produce a table with three columns and a count of class names:

Analyst / Class Name / count

What I am trying to do is produce an output with 4 columns, with a count under each discrete class-name column and a total for the row on the right:

Analyst / Class Name 1 / Class Name 2 / Total Count (of all class names)

<query>
| table "Analyst" "Class Name"
| stats count by Analyst "Class Name"
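A sketch of one way to get that shape (keeping the field names from the query above): `chart ... over ... by ...` pivots each class name into its own column, and `addtotals` appends the row total:

```spl
<query>
| chart count over Analyst by "Class Name"
| addtotals fieldname="Total Count"
```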
Is there a way to connect with help in real time? Please assist.
I have a Splunk server which runs both the Deployment Server and the License Master running on version 8.2.4. Due to the CVE released related to the Deployment Server, I would like to upgrade this server to 9.0 and I don't expect any issues between the DS and the universal forwarders. My concern is with the License Master as this component communicates with the rest of the deployment. The latest documentation of version compatibility is found here: Configure a license master - Splunk Documentation, which is for 8.2.6 as there is no version (as of today, 6/21/2022) of this page for version 9.0. Upgrading the entire Splunk Enterprise deployment right now is not feasible (we are planning in the near future to complete it), but upgrading this one component to mitigate the vulnerability is priority. Should I expect any issues with the licensing after the upgrade of the License Master when the major versions don't match with the rest of the deployment?
Today I've seen something strange. I was preparing a small workshop for a customer and wanted to show the performance difference between

index=_internal | stats count

and

| tstats count where index=_internal

I was completely baffled when the second search showed me (repeatedly) a count of 0. If I run the searches on any other Splunk instance I have access to, both show me more or less the same number (of course they can differ slightly, as _internal is dynamic, so a difference of a few dozen entries is perfectly understandable). But this one showed 0 with tstats. Has anyone encountered something like that? I didn't have time to investigate further; I hope I get some time tomorrow to look into it, but I'm puzzled. To make things more mysterious, for other indexes tstats shows proper counts. It's just the _internal index that claims it has no events. It's an 8.2.6 clustered (both indexer cluster and search head cluster) installation.
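One quick diagnostic for a situation like this (a sketch) is to split the tstats count by indexer, to see whether all peers, or only some, report zero:

```spl
| tstats count where index=_internal by splunk_server
```

Comparing that against `| tstats count where index=* by index` can help narrow down whether the problem is specific to _internal or to a subset of peers.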
Hi All, is it possible to retrieve the (Splunk SOAR) instance details inside a playbook? For instance, when sending an email I want to be able to tell whether the playbook ran in the dev or prod environment. Is there a list of all the global environment variables? Thanks in advance